00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 980 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3642 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.137 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.138 The recommended git tool is: git 00:00:00.139 using credential 00000000-0000-0000-0000-000000000002 00:00:00.140 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.224 Fetching changes from the remote Git repository 00:00:00.226 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.279 Using shallow fetch with depth 1 00:00:00.279 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.279 > git --version # timeout=10 00:00:00.314 > git --version # 'git version 2.39.2' 00:00:00.314 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.335 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.336 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.843 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.854 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.869 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.869 > git config core.sparsecheckout # timeout=10 00:00:07.881 > git read-tree -mu HEAD # timeout=10 00:00:07.898 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.917 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.917 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.047 [Pipeline] Start of Pipeline 00:00:08.059 [Pipeline] library 00:00:08.060 Loading library shm_lib@master 00:00:08.060 Library shm_lib@master is cached. Copying from home. 00:00:08.075 [Pipeline] node 00:00:08.092 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.094 [Pipeline] { 00:00:08.105 [Pipeline] catchError 00:00:08.107 [Pipeline] { 00:00:08.125 [Pipeline] wrap 00:00:08.136 [Pipeline] { 00:00:08.150 [Pipeline] stage 00:00:08.152 [Pipeline] { (Prologue) 00:00:08.177 [Pipeline] echo 00:00:08.179 Node: VM-host-SM38 00:00:08.187 [Pipeline] cleanWs 00:00:08.210 [WS-CLEANUP] Deleting project workspace... 00:00:08.210 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.218 [WS-CLEANUP] done 00:00:08.451 [Pipeline] setCustomBuildProperty 00:00:08.531 [Pipeline] httpRequest 00:00:08.899 [Pipeline] echo 00:00:08.901 Sorcerer 10.211.164.20 is alive 00:00:08.910 [Pipeline] retry 00:00:08.912 [Pipeline] { 00:00:08.926 [Pipeline] httpRequest 00:00:08.931 HttpMethod: GET 00:00:08.932 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.933 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.943 Response Code: HTTP/1.1 200 OK 00:00:08.944 Success: Status code 200 is in the accepted range: 200,404 00:00:08.944 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.457 [Pipeline] } 00:00:13.473 [Pipeline] // retry 00:00:13.480 [Pipeline] sh 00:00:13.763 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.779 [Pipeline] httpRequest 00:00:14.436 [Pipeline] echo 00:00:14.438 Sorcerer 10.211.164.20 is alive 00:00:14.446 [Pipeline] retry 00:00:14.448 [Pipeline] { 00:00:14.461 [Pipeline] httpRequest 00:00:14.466 HttpMethod: GET 00:00:14.467 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:14.467 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:14.490 Response Code: HTTP/1.1 200 OK 00:00:14.491 Success: Status code 200 is in the accepted range: 200,404 00:00:14.491 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:46.525 [Pipeline] } 00:01:46.543 [Pipeline] // retry 00:01:46.553 [Pipeline] sh 00:01:46.839 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:50.151 [Pipeline] sh 00:01:50.437 + git -C spdk log --oneline -n5 00:01:50.437 d47eb51c9 bdev: fix a race between reset start and complete 00:01:50.437 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:50.437 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:50.437 4bcab9fb9 correct kick for CQ full case 00:01:50.437 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:50.457 [Pipeline] withCredentials 00:01:50.468 > git --version # timeout=10 00:01:50.482 > git --version # 'git version 2.39.2' 00:01:50.499 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:50.502 [Pipeline] { 00:01:50.511 [Pipeline] retry 00:01:50.513 [Pipeline] { 00:01:50.529 [Pipeline] sh 00:01:50.809 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:50.822 [Pipeline] } 00:01:50.841 [Pipeline] // retry 00:01:50.846 [Pipeline] } 00:01:50.864 [Pipeline] // withCredentials 00:01:50.873 [Pipeline] httpRequest 00:01:51.236 [Pipeline] echo 00:01:51.238 Sorcerer 10.211.164.20 is alive 00:01:51.248 [Pipeline] retry 00:01:51.250 [Pipeline] { 00:01:51.264 [Pipeline] httpRequest 00:01:51.269 HttpMethod: GET 00:01:51.270 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:51.270 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:51.272 Response Code: HTTP/1.1 200 OK 00:01:51.272 Success: Status code 200 is in the accepted range: 200,404 00:01:51.273 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:57.146 [Pipeline] } 
00:01:57.163 [Pipeline] // retry 00:01:57.170 [Pipeline] sh 00:01:57.457 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:58.857 [Pipeline] sh 00:01:59.139 + git -C dpdk log --oneline -n5 00:01:59.139 eeb0605f11 version: 23.11.0 00:01:59.139 238778122a doc: update release notes for 23.11 00:01:59.139 46aa6b3cfc doc: fix description of RSS features 00:01:59.139 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:59.139 7e421ae345 devtools: support skipping forbid rule check 00:01:59.157 [Pipeline] writeFile 00:01:59.172 [Pipeline] sh 00:01:59.457 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:59.471 [Pipeline] sh 00:01:59.757 + cat autorun-spdk.conf 00:01:59.757 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.757 SPDK_TEST_NVME=1 00:01:59.757 SPDK_TEST_FTL=1 00:01:59.757 SPDK_TEST_ISAL=1 00:01:59.757 SPDK_RUN_ASAN=1 00:01:59.757 SPDK_RUN_UBSAN=1 00:01:59.757 SPDK_TEST_XNVME=1 00:01:59.757 SPDK_TEST_NVME_FDP=1 00:01:59.757 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:59.757 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:59.757 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.766 RUN_NIGHTLY=1 00:01:59.768 [Pipeline] } 00:01:59.782 [Pipeline] // stage 00:01:59.796 [Pipeline] stage 00:01:59.798 [Pipeline] { (Run VM) 00:01:59.810 [Pipeline] sh 00:02:00.096 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:00.096 + echo 'Start stage prepare_nvme.sh' 00:02:00.096 Start stage prepare_nvme.sh 00:02:00.096 + [[ -n 7 ]] 00:02:00.096 + disk_prefix=ex7 00:02:00.096 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:00.096 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:00.096 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:00.096 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.096 ++ SPDK_TEST_NVME=1 00:02:00.096 ++ SPDK_TEST_FTL=1 00:02:00.096 ++ SPDK_TEST_ISAL=1 00:02:00.096 ++ SPDK_RUN_ASAN=1 00:02:00.096 ++ SPDK_RUN_UBSAN=1 00:02:00.096 ++ SPDK_TEST_XNVME=1 00:02:00.096 ++ SPDK_TEST_NVME_FDP=1 00:02:00.096 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:00.096 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:00.096 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.096 ++ RUN_NIGHTLY=1 00:02:00.096 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:00.096 + nvme_files=() 00:02:00.096 + declare -A nvme_files 00:02:00.096 + backend_dir=/var/lib/libvirt/images/backends 00:02:00.096 + nvme_files['nvme.img']=5G 00:02:00.096 + nvme_files['nvme-cmb.img']=5G 00:02:00.096 + nvme_files['nvme-multi0.img']=4G 00:02:00.096 + nvme_files['nvme-multi1.img']=4G 00:02:00.096 + nvme_files['nvme-multi2.img']=4G 00:02:00.096 + nvme_files['nvme-openstack.img']=8G 00:02:00.096 + nvme_files['nvme-zns.img']=5G 00:02:00.096 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:00.096 + (( SPDK_TEST_FTL == 1 )) 00:02:00.096 + nvme_files["nvme-ftl.img"]=6G 00:02:00.096 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:00.097 + nvme_files["nvme-fdp.img"]=1G 00:02:00.097 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:00.097 + for nvme in "${!nvme_files[@]}" 00:02:00.097 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:02:00.358 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.358 + for nvme in "${!nvme_files[@]}" 00:02:00.358 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:02:01.303 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:01.303 + for nvme in "${!nvme_files[@]}" 00:02:01.303 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:02:01.303 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:01.303 + for nvme in "${!nvme_files[@]}" 00:02:01.303 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:02:01.303 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:01.303 + for nvme in "${!nvme_files[@]}" 00:02:01.303 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:02:01.303 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:01.303 + for nvme in "${!nvme_files[@]}" 00:02:01.303 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:02:01.565 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:01.565 + for nvme in "${!nvme_files[@]}" 00:02:01.565 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:02:02.139 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:02.139 + for nvme in "${!nvme_files[@]}" 00:02:02.139 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:02:02.403 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:02.403 + for nvme in "${!nvme_files[@]}" 00:02:02.403 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:02:02.977 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:02.977 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:02:02.977 + echo 'End stage prepare_nvme.sh' 00:02:02.977 End stage prepare_nvme.sh 00:02:02.991 [Pipeline] sh 00:02:03.275 + DISTRO=fedora39 00:02:03.275 + CPUS=10 00:02:03.275 + RAM=12288 00:02:03.275 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:03.275 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:03.275 00:02:03.275 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:03.275 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:03.275 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:03.275 HELP=0
00:02:03.275 DRY_RUN=0
00:02:03.275 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:02:03.275 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:03.276 NVME_AUTO_CREATE=0
00:02:03.276 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:02:03.276 NVME_CMB=,,,,
00:02:03.276 NVME_PMR=,,,,
00:02:03.276 NVME_ZNS=,,,,
00:02:03.276 NVME_MS=true,,,,
00:02:03.276 NVME_FDP=,,,on,
00:02:03.276 SPDK_VAGRANT_DISTRO=fedora39
00:02:03.276 SPDK_VAGRANT_VMCPU=10
00:02:03.276 SPDK_VAGRANT_VMRAM=12288
00:02:03.276 SPDK_VAGRANT_PROVIDER=libvirt
00:02:03.276 SPDK_VAGRANT_HTTP_PROXY=
00:02:03.276 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:03.276 SPDK_OPENSTACK_NETWORK=0
00:02:03.276 VAGRANT_PACKAGE_BOX=0
00:02:03.276 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:03.276 FORCE_DISTRO=true
00:02:03.276 VAGRANT_BOX_VERSION=
00:02:03.276 EXTRA_VAGRANTFILES=
00:02:03.276 NIC_MODEL=e1000
00:02:03.276
00:02:03.276 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:03.276 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:05.818 Bringing machine 'default' up with 'libvirt' provider...
00:02:06.077 ==> default: Creating image (snapshot of base box volume).
00:02:06.338 ==> default: Creating domain with the following settings...
00:02:06.338 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731935702_502371262ee7c4d1ed30
00:02:06.338 ==> default: -- Domain type: kvm
00:02:06.338 ==> default: -- Cpus: 10
00:02:06.338 ==> default: -- Feature: acpi
00:02:06.338 ==> default: -- Feature: apic
00:02:06.338 ==> default: -- Feature: pae
00:02:06.338 ==> default: -- Memory: 12288M
00:02:06.338 ==> default: -- Memory Backing: hugepages:
00:02:06.338 ==> default: -- Management MAC:
00:02:06.338 ==> default: -- Loader:
00:02:06.338 ==> default: -- Nvram:
00:02:06.338 ==> default: -- Base box: spdk/fedora39
00:02:06.338 ==> default: -- Storage pool: default
00:02:06.338 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731935702_502371262ee7c4d1ed30.img (20G)
00:02:06.338 ==> default: -- Volume Cache: default
00:02:06.338 ==> default: -- Kernel:
00:02:06.338 ==> default: -- Initrd:
00:02:06.338 ==> default: -- Graphics Type: vnc
00:02:06.338 ==> default: -- Graphics Port: -1
00:02:06.338 ==> default: -- Graphics IP: 127.0.0.1
00:02:06.338 ==> default: -- Graphics Password: Not defined
00:02:06.338 ==> default: -- Video Type: cirrus
00:02:06.338 ==> default: -- Video VRAM: 9216
00:02:06.338 ==> default: -- Sound Type:
00:02:06.338 ==> default: -- Keymap: en-us
00:02:06.338 ==> default: -- TPM Path:
00:02:06.338 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:06.338 ==> default: -- Command line args:
00:02:06.338 ==> default: -> value=-device,
00:02:06.338 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:06.338 ==> default: -> value=-drive,
00:02:06.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:06.338 ==> default: -> value=-device,
00:02:06.338 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:06.338 ==> default: -> value=-device,
00:02:06.338 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:06.338 ==> default: -> value=-drive,
00:02:06.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:02:06.338 ==> default: -> value=-device,
00:02:06.338 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:06.338 ==> default: -> value=-device,
00:02:06.338 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:06.338 ==> default: -> value=-drive,
00:02:06.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:06.338 ==> default: -> value=-device,
00:02:06.338 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:06.338 ==> default: -> value=-drive,
00:02:06.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:06.338 ==> default: -> value=-device,
00:02:06.338 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:06.338 ==> default: -> value=-drive,
00:02:06.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:06.338 ==> default: -> value=-device,
00:02:06.338 ==> default: ->
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.338 ==> default: -> value=-device, 00:02:06.338 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:06.338 ==> default: -> value=-device, 00:02:06.338 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:06.338 ==> default: -> value=-drive, 00:02:06.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:06.338 ==> default: -> value=-device, 00:02:06.338 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.599 ==> default: Creating shared folders metadata... 00:02:06.599 ==> default: Starting domain. 00:02:08.539 ==> default: Waiting for domain to get an IP address... 00:02:26.655 ==> default: Waiting for SSH to become available... 00:02:26.655 ==> default: Configuring and enabling network interfaces... 00:02:29.204 default: SSH address: 192.168.121.158:22 00:02:29.204 default: SSH username: vagrant 00:02:29.204 default: SSH auth method: private key 00:02:31.122 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:39.270 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:44.623 ==> default: Mounting SSHFS shared folder... 00:02:46.542 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:46.542 ==> default: Checking Mount.. 00:02:47.928 ==> default: Folder Successfully Mounted! 00:02:47.928 00:02:47.928 SUCCESS! 00:02:47.928 00:02:47.928 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:47.928 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:47.928 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:47.928 00:02:47.939 [Pipeline] } 00:02:47.950 [Pipeline] // stage 00:02:47.957 [Pipeline] dir 00:02:47.958 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:47.959 [Pipeline] { 00:02:47.971 [Pipeline] catchError 00:02:47.973 [Pipeline] { 00:02:47.985 [Pipeline] sh 00:02:48.269 + vagrant ssh-config --host vagrant 00:02:48.269 + sed -ne '/^Host/,$p' 00:02:48.269 + tee ssh_conf 00:02:50.814 Host vagrant 00:02:50.814 HostName 192.168.121.158 00:02:50.814 User vagrant 00:02:50.814 Port 22 00:02:50.814 UserKnownHostsFile /dev/null 00:02:50.814 StrictHostKeyChecking no 00:02:50.814 PasswordAuthentication no 00:02:50.814 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:50.814 IdentitiesOnly yes 00:02:50.814 LogLevel FATAL 00:02:50.814 ForwardAgent yes 00:02:50.814 ForwardX11 yes 00:02:50.814 00:02:50.829 [Pipeline] withEnv 00:02:50.832 [Pipeline] { 00:02:50.845 [Pipeline] sh 00:02:51.130 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:51.130 source /etc/os-release 00:02:51.130 [[ -e /image.version ]] && img=$(< /image.version) 00:02:51.130 # Minimal, systemd-like check. 
00:02:51.130 if [[ -e /.dockerenv ]]; then 00:02:51.130 # Clear garbage from the node'\''s name: 00:02:51.130 # agt-er_autotest_547-896 -> autotest_547-896 00:02:51.130 # $HOSTNAME is the actual container id 00:02:51.130 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:51.130 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:51.130 # We can assume this is a mount from a host where container is running, 00:02:51.130 # so fetch its hostname to easily identify the target swarm worker. 00:02:51.130 container="$(< /etc/hostname) ($agent)" 00:02:51.130 else 00:02:51.130 # Fallback 00:02:51.130 container=$agent 00:02:51.130 fi 00:02:51.130 fi 00:02:51.130 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:51.130 ' 00:02:51.404 [Pipeline] } 00:02:51.422 [Pipeline] // withEnv 00:02:51.431 [Pipeline] setCustomBuildProperty 00:02:51.446 [Pipeline] stage 00:02:51.449 [Pipeline] { (Tests) 00:02:51.466 [Pipeline] sh 00:02:51.747 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:52.021 [Pipeline] sh 00:02:52.419 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:52.691 [Pipeline] timeout 00:02:52.691 Timeout set to expire in 50 min 00:02:52.692 [Pipeline] { 00:02:52.762 [Pipeline] sh 00:02:53.046 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:53.618 HEAD is now at d47eb51c9 bdev: fix a race between reset start and complete 00:02:53.633 [Pipeline] sh 00:02:53.917 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:54.192 [Pipeline] sh 00:02:54.475 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:54.754 [Pipeline] sh 00:02:55.038 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:55.038 ++ readlink -f spdk_repo 00:02:55.300 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:55.300 + [[ -n /home/vagrant/spdk_repo ]] 00:02:55.300 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:55.300 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:55.300 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:55.300 + [[ ! 
-d /home/vagrant/spdk_repo/output ]]
00:02:55.300 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:55.300 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:55.300 + cd /home/vagrant/spdk_repo
00:02:55.300 + source /etc/os-release
00:02:55.300 ++ NAME='Fedora Linux'
00:02:55.300 ++ VERSION='39 (Cloud Edition)'
00:02:55.300 ++ ID=fedora
00:02:55.300 ++ VERSION_ID=39
00:02:55.300 ++ VERSION_CODENAME=
00:02:55.300 ++ PLATFORM_ID=platform:f39
00:02:55.300 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:55.300 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:55.300 ++ LOGO=fedora-logo-icon
00:02:55.300 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:55.300 ++ HOME_URL=https://fedoraproject.org/
00:02:55.300 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:55.300 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:55.300 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:55.300 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:55.300 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:55.300 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:55.300 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:55.300 ++ SUPPORT_END=2024-11-12
00:02:55.300 ++ VARIANT='Cloud Edition'
00:02:55.300 ++ VARIANT_ID=cloud
00:02:55.300 + uname -a
00:02:55.300 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:55.300 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:55.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:55.823 Hugepages
00:02:55.823 node hugesize free / total
00:02:55.823 node0 1048576kB 0 / 0
00:02:55.823 node0 2048kB 0 / 0
00:02:55.823
00:02:55.823 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:55.823 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:55.823 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:55.823 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:55.823 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:55.823 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:55.823 + rm -f /tmp/spdk-ld-path
00:02:55.823 + source autorun-spdk.conf
00:02:55.823 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:55.823 ++ SPDK_TEST_NVME=1
00:02:55.823 ++ SPDK_TEST_FTL=1
00:02:55.823 ++ SPDK_TEST_ISAL=1
00:02:55.823 ++ SPDK_RUN_ASAN=1
00:02:55.823 ++ SPDK_RUN_UBSAN=1
00:02:55.823 ++ SPDK_TEST_XNVME=1
00:02:55.823 ++ SPDK_TEST_NVME_FDP=1
00:02:55.823 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:55.823 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:55.823 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:55.823 ++ RUN_NIGHTLY=1
00:02:55.823 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:55.823 + [[ -n '' ]]
00:02:55.823 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:55.823 + for M in /var/spdk/build-*-manifest.txt
00:02:55.823 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:55.823 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:56.085 + for M in /var/spdk/build-*-manifest.txt
00:02:56.085 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:56.085 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:56.085 + for M in /var/spdk/build-*-manifest.txt
00:02:56.085 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:56.085 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:56.085 ++ uname
00:02:56.085 + [[ Linux ==
\L\i\n\u\x ]] 00:02:56.085 + sudo dmesg -T 00:02:56.085 + sudo dmesg --clear 00:02:56.085 + dmesg_pid=5782 00:02:56.085 + [[ Fedora Linux == FreeBSD ]] 00:02:56.085 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:56.085 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:56.086 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:56.086 + [[ -x /usr/src/fio-static/fio ]] 00:02:56.086 + sudo dmesg -Tw 00:02:56.086 + export FIO_BIN=/usr/src/fio-static/fio 00:02:56.086 + FIO_BIN=/usr/src/fio-static/fio 00:02:56.086 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:56.086 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:56.086 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:56.086 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:56.086 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:56.086 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:56.086 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:56.086 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:56.086 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.086 13:15:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:56.086 13:15:52 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.086 13:15:52 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:56.086 13:15:52 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:56.086 13:15:52 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.086 13:15:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:56.086 13:15:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:56.086 13:15:52 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:56.086 13:15:52 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:56.086 13:15:52 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:56.086 13:15:52 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:56.086 13:15:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.086 13:15:52 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.086 13:15:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.086 13:15:52 -- paths/export.sh@5 -- $ export PATH 00:02:56.086 13:15:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.086 13:15:52 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:56.086 13:15:52 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:56.086 13:15:52 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731935752.XXXXXX 00:02:56.086 13:15:52 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731935752.N9VAW4 00:02:56.086 13:15:52 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:56.086 13:15:52 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:02:56.086 13:15:52 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:56.086 13:15:52 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:56.086 13:15:52 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:56.086 13:15:52 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:56.086 13:15:52 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:56.086 13:15:52 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:56.086 13:15:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:56.086 13:15:52 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:56.086 13:15:52 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:56.086 13:15:52 -- pm/common@17 -- $ local monitor 00:02:56.086 13:15:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.086 13:15:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.086 13:15:52 -- pm/common@25 -- $ 
sleep 1 00:02:56.086 13:15:52 -- pm/common@21 -- $ date +%s 00:02:56.086 13:15:52 -- pm/common@21 -- $ date +%s 00:02:56.086 13:15:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731935752 00:02:56.086 13:15:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731935752 00:02:56.348 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731935752_collect-cpu-load.pm.log 00:02:56.348 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731935752_collect-vmstat.pm.log 00:02:57.292 13:15:53 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:57.292 13:15:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:57.292 13:15:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:57.292 13:15:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:57.292 13:15:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:57.292 Mon Nov 18 01:15:53 PM UTC 2024 00:02:57.292 13:15:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:57.292 v25.01-pre-190-gd47eb51c9 00:02:57.292 13:15:53 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:57.292 13:15:53 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:57.292 13:15:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:57.292 13:15:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:57.292 13:15:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.292 ************************************ 00:02:57.292 START TEST asan 00:02:57.292 ************************************ 00:02:57.292 using asan 00:02:57.292 13:15:53 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:57.292 00:02:57.292 real 0m0.000s 00:02:57.292 user 0m0.000s 00:02:57.292 sys 0m0.000s 00:02:57.292 ************************************ 00:02:57.292 END TEST asan 00:02:57.292 ************************************ 00:02:57.292 13:15:53 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:57.292 13:15:53 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:57.292 13:15:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:57.292 13:15:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:57.292 13:15:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:57.292 13:15:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:57.292 13:15:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.292 ************************************ 00:02:57.292 START TEST ubsan 00:02:57.292 ************************************ 00:02:57.292 using ubsan 00:02:57.292 ************************************ 00:02:57.292 13:15:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:57.292 00:02:57.292 real 0m0.000s 00:02:57.292 user 0m0.000s 00:02:57.292 sys 0m0.000s 00:02:57.292 13:15:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:57.292 13:15:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:57.292 END TEST ubsan 00:02:57.292 ************************************ 00:02:57.292 13:15:53 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:57.292 13:15:53 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:57.292 13:15:53 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:57.292 13:15:53 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 
']' 00:02:57.292 13:15:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:57.292 13:15:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.292 ************************************ 00:02:57.292 START TEST build_native_dpdk 00:02:57.292 ************************************ 00:02:57.292 13:15:53 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:57.292 eeb0605f11 version: 23.11.0 00:02:57.292 238778122a doc: update release notes for 23.11 00:02:57.292 46aa6b3cfc doc: fix description of RSS features 00:02:57.292 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:57.292 7e421ae345 devtools: support skipping forbid rule check 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:57.292 13:15:53 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:57.292 13:15:53 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:57.292 patching file config/rte_config.h 00:02:57.292 Hunk #1 succeeded at 60 (offset 1 line). 00:02:57.292 13:15:53 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:57.293 13:15:53 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:57.293 patching file lib/pcapng/rte_pcapng.c 00:02:57.293 13:15:53 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:57.293 13:15:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:57.555 13:15:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:57.555 13:15:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:57.555 13:15:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:57.555 13:15:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:57.555 13:15:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:57.555 13:15:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:57.555 13:15:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:57.555 13:15:53 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:57.555 13:15:53 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:57.555 13:15:53 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:57.555 13:15:53 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:57.555 13:15:53 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:57.555 13:15:53 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:57.555 13:15:53 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:01.763 The Meson build system 00:03:01.763 Version: 1.5.0 00:03:01.763 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:01.763 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:01.763 Build type: native build 00:03:01.763 Program cat found: YES (/usr/bin/cat) 00:03:01.763 Project name: DPDK 00:03:01.763 Project version: 23.11.0 00:03:01.763 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:01.763 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:01.763 Host machine cpu family: x86_64 00:03:01.763 Host machine cpu: x86_64 00:03:01.763 Message: ## Building in Developer Mode ## 00:03:01.763 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:01.763 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:01.763 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:01.763 Program python3 found: YES (/usr/bin/python3) 00:03:01.763 Program cat found: YES (/usr/bin/cat) 00:03:01.763 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:03:01.763 Compiler for C supports arguments -march=native: YES 00:03:01.763 Checking for size of "void *" : 8 00:03:01.763 Checking for size of "void *" : 8 (cached) 00:03:01.763 Library m found: YES 00:03:01.763 Library numa found: YES 00:03:01.763 Has header "numaif.h" : YES 00:03:01.763 Library fdt found: NO 00:03:01.763 Library execinfo found: NO 00:03:01.763 Has header "execinfo.h" : YES 00:03:01.763 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:01.763 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:01.763 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:01.763 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:01.763 Run-time dependency openssl found: YES 3.1.1 00:03:01.763 Run-time dependency libpcap found: YES 1.10.4 00:03:01.763 Has header "pcap.h" with dependency libpcap: YES 00:03:01.763 Compiler for C supports arguments -Wcast-qual: YES 00:03:01.763 Compiler for C supports arguments -Wdeprecated: YES 00:03:01.763 Compiler for C supports arguments -Wformat: YES 00:03:01.763 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:01.763 Compiler for C supports arguments -Wformat-security: NO 00:03:01.763 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.763 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:01.763 Compiler for C supports arguments -Wnested-externs: YES 00:03:01.763 Compiler for C supports arguments -Wold-style-definition: YES 00:03:01.763 Compiler for C supports arguments -Wpointer-arith: YES 00:03:01.763 Compiler for C supports arguments -Wsign-compare: YES 00:03:01.763 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:01.763 Compiler for C supports arguments -Wundef: YES 00:03:01.763 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.763 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:01.763 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:01.763 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.763 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:01.763 Program objdump found: YES (/usr/bin/objdump) 00:03:01.763 Compiler for C supports arguments -mavx512f: YES 00:03:01.763 Checking if "AVX512 checking" compiles: YES 00:03:01.763 Fetching value of define "__SSE4_2__" : 1 00:03:01.763 Fetching value of define "__AES__" : 1 00:03:01.763 Fetching value of define "__AVX__" : 1 00:03:01.763 Fetching value of define "__AVX2__" : 1 00:03:01.763 Fetching value of define "__AVX512BW__" : 1 00:03:01.763 Fetching value of define "__AVX512CD__" : 1 00:03:01.763 Fetching value of define "__AVX512DQ__" : 1 00:03:01.763 Fetching value of define "__AVX512F__" : 1 00:03:01.763 Fetching value of define "__AVX512VL__" : 1 00:03:01.763 Fetching value of define "__PCLMUL__" : 1 00:03:01.763 Fetching value of define "__RDRND__" : 1 00:03:01.763 Fetching value of define "__RDSEED__" : 1 00:03:01.763 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:01.763 Fetching value of define "__znver1__" : (undefined) 00:03:01.763 Fetching value of define "__znver2__" : (undefined) 00:03:01.763 Fetching value of define "__znver3__" : (undefined) 00:03:01.763 Fetching value of define "__znver4__" : (undefined) 00:03:01.763 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:01.763 Message: lib/log: Defining dependency "log" 00:03:01.763 Message: lib/kvargs: Defining dependency "kvargs" 00:03:01.763 Message: lib/telemetry: Defining dependency "telemetry" 
00:03:01.763 Checking for function "getentropy" : NO 00:03:01.763 Message: lib/eal: Defining dependency "eal" 00:03:01.763 Message: lib/ring: Defining dependency "ring" 00:03:01.763 Message: lib/rcu: Defining dependency "rcu" 00:03:01.763 Message: lib/mempool: Defining dependency "mempool" 00:03:01.763 Message: lib/mbuf: Defining dependency "mbuf" 00:03:01.763 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:01.763 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.763 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:01.763 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:01.763 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:01.763 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:01.763 Compiler for C supports arguments -mpclmul: YES 00:03:01.763 Compiler for C supports arguments -maes: YES 00:03:01.763 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:01.763 Compiler for C supports arguments -mavx512bw: YES 00:03:01.763 Compiler for C supports arguments -mavx512dq: YES 00:03:01.763 Compiler for C supports arguments -mavx512vl: YES 00:03:01.763 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:01.763 Compiler for C supports arguments -mavx2: YES 00:03:01.763 Compiler for C supports arguments -mavx: YES 00:03:01.763 Message: lib/net: Defining dependency "net" 00:03:01.763 Message: lib/meter: Defining dependency "meter" 00:03:01.763 Message: lib/ethdev: Defining dependency "ethdev" 00:03:01.763 Message: lib/pci: Defining dependency "pci" 00:03:01.763 Message: lib/cmdline: Defining dependency "cmdline" 00:03:01.763 Message: lib/metrics: Defining dependency "metrics" 00:03:01.763 Message: lib/hash: Defining dependency "hash" 00:03:01.763 Message: lib/timer: Defining dependency "timer" 00:03:01.763 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.763 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:01.763 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:01.763 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:01.763 Message: lib/acl: Defining dependency "acl" 00:03:01.763 Message: lib/bbdev: Defining dependency "bbdev" 00:03:01.763 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:01.763 Run-time dependency libelf found: YES 0.191 00:03:01.763 Message: lib/bpf: Defining dependency "bpf" 00:03:01.763 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:01.763 Message: lib/compressdev: Defining dependency "compressdev" 00:03:01.763 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:01.763 Message: lib/distributor: Defining dependency "distributor" 00:03:01.763 Message: lib/dmadev: Defining dependency "dmadev" 00:03:01.763 Message: lib/efd: Defining dependency "efd" 00:03:01.763 Message: lib/eventdev: Defining dependency "eventdev" 00:03:01.763 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:01.763 Message: lib/gpudev: Defining dependency "gpudev" 00:03:01.763 Message: lib/gro: Defining dependency "gro" 00:03:01.763 Message: lib/gso: Defining dependency "gso" 00:03:01.763 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:01.763 Message: lib/jobstats: Defining dependency "jobstats" 00:03:01.763 Message: lib/latencystats: Defining dependency "latencystats" 00:03:01.763 Message: lib/lpm: Defining dependency "lpm" 00:03:01.763 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.763 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:01.763 Fetching value of define "__AVX512IFMA__" : 1 00:03:01.764 Message: 
lib/member: Defining dependency "member" 00:03:01.764 Message: lib/pcapng: Defining dependency "pcapng" 00:03:01.764 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:01.764 Message: lib/power: Defining dependency "power" 00:03:01.764 Message: lib/rawdev: Defining dependency "rawdev" 00:03:01.764 Message: lib/regexdev: Defining dependency "regexdev" 00:03:01.764 Message: lib/mldev: Defining dependency "mldev" 00:03:01.764 Message: lib/rib: Defining dependency "rib" 00:03:01.764 Message: lib/reorder: Defining dependency "reorder" 00:03:01.764 Message: lib/sched: Defining dependency "sched" 00:03:01.764 Message: lib/security: Defining dependency "security" 00:03:01.764 Message: lib/stack: Defining dependency "stack" 00:03:01.764 Has header "linux/userfaultfd.h" : YES 00:03:01.764 Has header "linux/vduse.h" : YES 00:03:01.764 Message: lib/vhost: Defining dependency "vhost" 00:03:01.764 Message: lib/ipsec: Defining dependency "ipsec" 00:03:01.764 Message: lib/pdcp: Defining dependency "pdcp" 00:03:01.764 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.764 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:01.764 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:01.764 Message: lib/fib: Defining dependency "fib" 00:03:01.764 Message: lib/port: Defining dependency "port" 00:03:01.764 Message: lib/pdump: Defining dependency "pdump" 00:03:01.764 Message: lib/table: Defining dependency "table" 00:03:01.764 Message: lib/pipeline: Defining dependency "pipeline" 00:03:01.764 Message: lib/graph: Defining dependency "graph" 00:03:01.764 Message: lib/node: Defining dependency "node" 00:03:01.764 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:01.764 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:01.764 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:01.764 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:03.149 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:03.149 Compiler for C supports arguments -Wno-unused-value: YES 00:03:03.149 Compiler for C supports arguments -Wno-format: YES 00:03:03.149 Compiler for C supports arguments -Wno-format-security: YES 00:03:03.149 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:03.149 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:03.149 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:03.149 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:03.149 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:03.149 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:03.149 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:03.149 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:03.149 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:03.149 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:03.149 Has header "sys/epoll.h" : YES 00:03:03.149 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:03.149 Configuring doxy-api-html.conf using configuration 00:03:03.149 Configuring doxy-api-man.conf using configuration 00:03:03.149 Program mandb found: YES (/usr/bin/mandb) 00:03:03.149 Program sphinx-build found: NO 00:03:03.149 Configuring rte_build_config.h using configuration 00:03:03.149 Message: 00:03:03.149 ================= 00:03:03.149 Applications Enabled 00:03:03.149 ================= 00:03:03.149 00:03:03.149 apps: 00:03:03.149 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:03:03.149 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:03.150 test-pmd, test-regex, test-sad, test-security-perf, 00:03:03.150 00:03:03.150 Message: 00:03:03.150 ================= 00:03:03.150 Libraries Enabled 00:03:03.150 ================= 00:03:03.150 00:03:03.150 libs: 00:03:03.150 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:03.150 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:03.150 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:03.150 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:03.150 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:03.150 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:03.150 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:03.150 00:03:03.150 00:03:03.150 Message: 00:03:03.150 =============== 00:03:03.150 Drivers Enabled 00:03:03.150 =============== 00:03:03.150 00:03:03.150 common: 00:03:03.150 00:03:03.150 bus: 00:03:03.150 pci, vdev, 00:03:03.150 mempool: 00:03:03.150 ring, 00:03:03.150 dma: 00:03:03.150 00:03:03.150 net: 00:03:03.150 i40e, 00:03:03.150 raw: 00:03:03.150 00:03:03.150 crypto: 00:03:03.150 00:03:03.150 compress: 00:03:03.150 00:03:03.150 regex: 00:03:03.150 00:03:03.150 ml: 00:03:03.150 00:03:03.150 vdpa: 00:03:03.150 00:03:03.150 event: 00:03:03.150 00:03:03.150 baseband: 00:03:03.150 00:03:03.150 gpu: 00:03:03.150 00:03:03.150 00:03:03.150 Message: 00:03:03.150 ================= 00:03:03.150 Content Skipped 00:03:03.150 ================= 00:03:03.150 00:03:03.150 apps: 00:03:03.150 00:03:03.150 libs: 00:03:03.150 00:03:03.150 drivers: 00:03:03.150 common/cpt: not in enabled drivers build config 00:03:03.150 common/dpaax: not in enabled drivers build config 00:03:03.150 common/iavf: not in enabled drivers build config 00:03:03.150 common/idpf: not in enabled drivers build config 00:03:03.150 common/mvep: not in enabled drivers build config 00:03:03.150 common/octeontx: not in enabled drivers build config 00:03:03.150 bus/auxiliary: not in enabled drivers build config 00:03:03.150 bus/cdx: not in enabled drivers build config 00:03:03.150 bus/dpaa: not in enabled drivers build config 00:03:03.150 bus/fslmc: not in enabled drivers build config 00:03:03.150 bus/ifpga: not in enabled drivers build config 00:03:03.150 bus/platform: not in enabled drivers build config 00:03:03.150 bus/vmbus: not in enabled drivers build config 00:03:03.150 common/cnxk: not in enabled drivers build config 00:03:03.150 common/mlx5: not in enabled drivers build config 00:03:03.150 common/nfp: not in enabled drivers build config 00:03:03.150 common/qat: not in enabled drivers build config 00:03:03.150 common/sfc_efx: not in enabled drivers build config 00:03:03.150 mempool/bucket: not in enabled drivers build config 00:03:03.150 mempool/cnxk: not in enabled drivers build config 00:03:03.150 mempool/dpaa: not in enabled drivers build config 00:03:03.150 mempool/dpaa2: not in enabled drivers build config 00:03:03.150 mempool/octeontx: not in enabled drivers build config 00:03:03.150 mempool/stack: not in enabled drivers build config 00:03:03.150 dma/cnxk: not in enabled drivers build config 00:03:03.150 dma/dpaa: not in enabled drivers build config 00:03:03.150 dma/dpaa2: not in enabled drivers build config 00:03:03.150 dma/hisilicon: not in enabled drivers build config 00:03:03.150 dma/idxd: not in enabled drivers build 
config 00:03:03.150 dma/ioat: not in enabled drivers build config 00:03:03.150 dma/skeleton: not in enabled drivers build config 00:03:03.150 net/af_packet: not in enabled drivers build config 00:03:03.150 net/af_xdp: not in enabled drivers build config 00:03:03.150 net/ark: not in enabled drivers build config 00:03:03.150 net/atlantic: not in enabled drivers build config 00:03:03.150 net/avp: not in enabled drivers build config 00:03:03.150 net/axgbe: not in enabled drivers build config 00:03:03.150 net/bnx2x: not in enabled drivers build config 00:03:03.150 net/bnxt: not in enabled drivers build config 00:03:03.150 net/bonding: not in enabled drivers build config 00:03:03.150 net/cnxk: not in enabled drivers build config 00:03:03.150 net/cpfl: not in enabled drivers build config 00:03:03.150 net/cxgbe: not in enabled drivers build config 00:03:03.150 net/dpaa: not in enabled drivers build config 00:03:03.150 net/dpaa2: not in enabled drivers build config 00:03:03.150 net/e1000: not in enabled drivers build config 00:03:03.150 net/ena: not in enabled drivers build config 00:03:03.150 net/enetc: not in enabled drivers build config 00:03:03.150 net/enetfec: not in enabled drivers build config 00:03:03.150 net/enic: not in enabled drivers build config 00:03:03.150 net/failsafe: not in enabled drivers build config 00:03:03.150 net/fm10k: not in enabled drivers build config 00:03:03.150 net/gve: not in enabled drivers build config 00:03:03.150 net/hinic: not in enabled drivers build config 00:03:03.150 net/hns3: not in enabled drivers build config 00:03:03.150 net/iavf: not in enabled drivers build config 00:03:03.150 net/ice: not in enabled drivers build config 00:03:03.150 net/idpf: not in enabled drivers build config 00:03:03.150 net/igc: not in enabled drivers build config 00:03:03.150 net/ionic: not in enabled drivers build config 00:03:03.150 net/ipn3ke: not in enabled drivers build config 00:03:03.150 net/ixgbe: not in enabled drivers build config 00:03:03.150 net/mana: not in enabled drivers build config 00:03:03.150 net/memif: not in enabled drivers build config 00:03:03.150 net/mlx4: not in enabled drivers build config 00:03:03.150 net/mlx5: not in enabled drivers build config 00:03:03.150 net/mvneta: not in enabled drivers build config 00:03:03.150 net/mvpp2: not in enabled drivers build config 00:03:03.150 net/netvsc: not in enabled drivers build config 00:03:03.150 net/nfb: not in enabled drivers build config 00:03:03.150 net/nfp: not in enabled drivers build config 00:03:03.150 net/ngbe: not in enabled drivers build config 00:03:03.150 net/null: not in enabled drivers build config 00:03:03.150 net/octeontx: not in enabled drivers build config 00:03:03.150 net/octeon_ep: not in enabled drivers build config 00:03:03.150 net/pcap: not in enabled drivers build config 00:03:03.150 net/pfe: not in enabled drivers build config 00:03:03.150 net/qede: not in enabled drivers build config 00:03:03.150 net/ring: not in enabled drivers build config 00:03:03.150 net/sfc: not in enabled drivers build config 00:03:03.150 net/softnic: not in enabled drivers build config 00:03:03.150 net/tap: not in enabled drivers build config 00:03:03.150 net/thunderx: not in enabled drivers build config 00:03:03.150 net/txgbe: not in enabled drivers build config 00:03:03.150 net/vdev_netvsc: not in enabled drivers build config 00:03:03.150 net/vhost: not in enabled drivers build config 00:03:03.150 net/virtio: not in enabled drivers build config 00:03:03.150 net/vmxnet3: not in enabled drivers build config 
00:03:03.150 raw/cnxk_bphy: not in enabled drivers build config 00:03:03.150 raw/cnxk_gpio: not in enabled drivers build config 00:03:03.150 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:03.150 raw/ifpga: not in enabled drivers build config 00:03:03.150 raw/ntb: not in enabled drivers build config 00:03:03.150 raw/skeleton: not in enabled drivers build config 00:03:03.150 crypto/armv8: not in enabled drivers build config 00:03:03.150 crypto/bcmfs: not in enabled drivers build config 00:03:03.150 crypto/caam_jr: not in enabled drivers build config 00:03:03.150 crypto/ccp: not in enabled drivers build config 00:03:03.150 crypto/cnxk: not in enabled drivers build config 00:03:03.150 crypto/dpaa_sec: not in enabled drivers build config 00:03:03.150 crypto/dpaa2_sec: not in enabled drivers build config 00:03:03.150 crypto/ipsec_mb: not in enabled drivers build config 00:03:03.150 crypto/mlx5: not in enabled drivers build config 00:03:03.150 crypto/mvsam: not in enabled drivers build config 00:03:03.150 crypto/nitrox: not in enabled drivers build config 00:03:03.150 crypto/null: not in enabled drivers build config 00:03:03.150 crypto/octeontx: not in enabled drivers build config 00:03:03.150 crypto/openssl: not in enabled drivers build config 00:03:03.150 crypto/scheduler: not in enabled drivers build config 00:03:03.150 crypto/uadk: not in enabled drivers build config 00:03:03.150 crypto/virtio: not in enabled drivers build config 00:03:03.150 compress/isal: not in enabled drivers build config 00:03:03.150 compress/mlx5: not in enabled drivers build config 00:03:03.150 compress/octeontx: not in enabled drivers build config 00:03:03.150 compress/zlib: not in enabled drivers build config 00:03:03.150 regex/mlx5: not in enabled drivers build config 00:03:03.150 regex/cn9k: not in enabled drivers build config 00:03:03.150 ml/cnxk: not in enabled drivers build config 00:03:03.150 vdpa/ifc: not in enabled drivers build config 00:03:03.150 vdpa/mlx5: not in enabled drivers build config 00:03:03.150 vdpa/nfp: not in enabled drivers build config 00:03:03.150 vdpa/sfc: not in enabled drivers build config 00:03:03.150 event/cnxk: not in enabled drivers build config 00:03:03.150 event/dlb2: not in enabled drivers build config 00:03:03.150 event/dpaa: not in enabled drivers build config 00:03:03.150 event/dpaa2: not in enabled drivers build config 00:03:03.150 event/dsw: not in enabled drivers build config 00:03:03.150 event/opdl: not in enabled drivers build config 00:03:03.150 event/skeleton: not in enabled drivers build config 00:03:03.150 event/sw: not in enabled drivers build config 00:03:03.150 event/octeontx: not in enabled drivers build config 00:03:03.150 baseband/acc: not in enabled drivers build config 00:03:03.150 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:03.150 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:03.150 baseband/la12xx: not in enabled drivers build config 00:03:03.150 baseband/null: not in enabled drivers build config 00:03:03.150 baseband/turbo_sw: not in enabled drivers build config 00:03:03.150 gpu/cuda: not in enabled drivers build config 00:03:03.150 00:03:03.150 00:03:03.150 Build targets in project: 215 00:03:03.150 00:03:03.150 DPDK 23.11.0 00:03:03.150 00:03:03.150 User defined options 00:03:03.151 libdir : lib 00:03:03.151 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:03.151 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:03.151 c_link_args : 00:03:03.151 enable_docs : false 00:03:03.151 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:03.151 enable_kmods : false 00:03:03.151 machine : native 00:03:03.151 tests : false 00:03:03.151 00:03:03.151 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:03.151 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:03.151 13:15:59 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:03.151 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:03.151 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:03.151 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:03.151 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:03.151 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:03.151 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:03.151 [6/705] Linking static target lib/librte_kvargs.a 00:03:03.411 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:03.411 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:03.411 [9/705] Linking static target lib/librte_log.a 00:03:03.411 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:03.411 [11/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.411 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:03.411 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:03.411 [14/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:03.672 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:03.672 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:03.672 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.672 [18/705] Linking target lib/librte_log.so.24.0 00:03:03.672 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:03.672 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:03.932 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:03.933 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:03.933 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:03.933 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:03.933 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:03.933 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:03.933 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:03.933 [28/705] Linking target lib/librte_kvargs.so.24.0 00:03:03.933 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:04.193 [30/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:04.193 [31/705] Linking static target lib/librte_telemetry.a 00:03:04.193 [32/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:04.193 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:04.193 [34/705] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:04.193 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:04.193 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:04.193 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:04.193 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:04.193 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:04.193 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:04.454 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:04.454 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.454 [43/705] Linking target lib/librte_telemetry.so.24.0 00:03:04.454 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:04.454 [45/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:04.454 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:04.713 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:04.713 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:04.713 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:04.713 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:04.714 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:04.714 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:04.714 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:04.714 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:04.972 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:04.972 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:04.972 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:04.972 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:04.972 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:04.972 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.972 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:04.972 [62/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:04.972 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:04.972 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:04.972 [65/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:05.231 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:05.231 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:05.231 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:05.231 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:05.231 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:05.231 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:05.231 [72/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:05.490 [73/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
00:03:05.490 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:05.490 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:05.490 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:05.490 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:05.490 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:05.490 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:05.747 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:05.747 [81/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:05.747 [82/705] Linking static target lib/librte_ring.a 00:03:05.747 [83/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:05.747 [84/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:05.747 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:05.747 [86/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.005 [87/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:06.005 [88/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:06.005 [89/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:06.005 [90/705] Linking static target lib/librte_eal.a 00:03:06.005 [91/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:06.005 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:06.005 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:06.005 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:06.005 [95/705] Linking static target lib/librte_mempool.a 00:03:06.263 [96/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:06.263 [97/705] Linking static target lib/librte_rcu.a 00:03:06.263 [98/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:06.263 [99/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:06.263 [100/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:06.263 [101/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:06.263 [102/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:06.263 [103/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:06.521 [104/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.521 [105/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.521 [106/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:06.521 [107/705] Linking static target lib/librte_meter.a 00:03:06.522 [108/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:06.522 [109/705] Linking static target lib/librte_net.a 00:03:06.781 [110/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:06.781 [111/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:06.781 [112/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:06.781 [113/705] Linking static target lib/librte_mbuf.a 00:03:06.781 [114/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:06.781 [115/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.781 [116/705] 
Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.781 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:07.039 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.039 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:07.039 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:07.297 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:07.297 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:07.297 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:07.297 [124/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:07.297 [125/705] Linking static target lib/librte_pci.a 00:03:07.297 [126/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:07.297 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:07.555 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:07.555 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:07.555 [130/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:07.555 [131/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.555 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:07.555 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:07.556 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:07.556 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:07.556 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:07.556 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:07.814 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:07.814 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:07.814 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:07.814 [141/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:07.814 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:07.814 [143/705] Linking static target lib/librte_cmdline.a 00:03:07.814 [144/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:07.814 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:08.071 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:08.071 [147/705] Linking static target lib/librte_metrics.a 00:03:08.071 [148/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:08.072 [149/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:08.329 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.329 [151/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:08.329 [152/705] Linking static target lib/librte_timer.a 00:03:08.329 [153/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.329 [154/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:08.329 [155/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 
00:03:08.588 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.588 [157/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:08.588 [158/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:08.588 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:08.846 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:08.846 [161/705] Linking static target lib/librte_bitratestats.a 00:03:08.846 [162/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:09.105 [163/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.105 [164/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:09.105 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:09.105 [166/705] Linking static target lib/librte_bbdev.a 00:03:09.105 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:09.363 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:09.363 [169/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:09.363 [170/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:09.363 [171/705] Linking static target lib/librte_hash.a 00:03:09.363 [172/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.621 [173/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:09.621 [174/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:09.621 [175/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:09.621 [176/705] Linking static target lib/librte_ethdev.a 00:03:09.879 [177/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:09.879 [178/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.879 [179/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:09.879 [180/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.879 [181/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:09.879 [182/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:09.879 [183/705] Linking static target lib/librte_cfgfile.a 00:03:09.879 [184/705] Linking target lib/librte_eal.so.24.0 00:03:10.136 [185/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:10.136 [186/705] Linking static target lib/acl/libavx2_tmp.a 00:03:10.136 [187/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:10.136 [188/705] Linking target lib/librte_ring.so.24.0 00:03:10.136 [189/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.136 [190/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:10.136 [191/705] Linking target lib/librte_meter.so.24.0 00:03:10.136 [192/705] Linking target lib/librte_rcu.so.24.0 00:03:10.136 [193/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:10.136 [194/705] Linking target lib/librte_mempool.so.24.0 00:03:10.136 [195/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:10.136 [196/705] Linking target lib/librte_pci.so.24.0 00:03:10.136 [197/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:10.136 [198/705] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:10.395 [199/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:10.395 [200/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:10.395 [201/705] Linking target lib/librte_timer.so.24.0 00:03:10.395 [202/705] Linking target lib/librte_cfgfile.so.24.0 00:03:10.395 [203/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:10.395 [204/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:10.395 [205/705] Linking target lib/librte_mbuf.so.24.0 00:03:10.395 [206/705] Linking static target lib/librte_compressdev.a 00:03:10.395 [207/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:10.395 [208/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:10.395 [209/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:10.395 [210/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:10.395 [211/705] Linking static target lib/librte_bpf.a 00:03:10.395 [212/705] Linking target lib/librte_net.so.24.0 00:03:10.395 [213/705] Linking target lib/librte_bbdev.so.24.0 00:03:10.653 [214/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:10.653 [215/705] Linking target lib/librte_cmdline.so.24.0 00:03:10.653 [216/705] Linking target lib/librte_hash.so.24.0 00:03:10.653 [217/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.653 [218/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:10.653 [219/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:10.653 [220/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:10.653 [221/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.653 [222/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:10.653 [223/705] Linking target lib/librte_compressdev.so.24.0 00:03:10.653 [224/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:10.653 [225/705] Linking static target lib/librte_acl.a 00:03:10.912 [226/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:10.912 [227/705] Linking static target lib/librte_distributor.a 00:03:10.912 [228/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.912 [229/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:10.912 [230/705] Linking target lib/librte_acl.so.24.0 00:03:10.912 [231/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:11.170 [232/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:11.170 [233/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.170 [234/705] Linking target lib/librte_distributor.so.24.0 00:03:11.170 [235/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:11.429 [236/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:11.429 [237/705] Linking static target lib/librte_dmadev.a 00:03:11.429 [238/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:11.429 [239/705] Compiling C 
object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:11.429 [240/705] Linking static target lib/librte_efd.a 00:03:11.688 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:11.688 [242/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.688 [243/705] Linking target lib/librte_dmadev.so.24.0 00:03:11.688 [244/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.688 [245/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:11.688 [246/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:11.688 [247/705] Linking target lib/librte_efd.so.24.0 00:03:11.688 [248/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:11.688 [249/705] Linking static target lib/librte_cryptodev.a 00:03:11.946 [250/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:11.946 [251/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:11.946 [252/705] Linking static target lib/librte_dispatcher.a 00:03:12.204 [253/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:12.204 [254/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.204 [255/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:12.204 [256/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:12.204 [257/705] Linking static target lib/librte_gpudev.a 00:03:12.204 [258/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:12.462 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:12.462 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:12.720 [261/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.720 [262/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:12.720 [263/705] Linking target lib/librte_cryptodev.so.24.0 00:03:12.720 [264/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:12.720 [265/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:12.720 [266/705] Linking static target lib/librte_gro.a 00:03:12.720 [267/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:12.720 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:12.720 [269/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.720 [270/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.979 [271/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.979 [272/705] Linking target lib/librte_gpudev.so.24.0 00:03:12.979 [273/705] Linking target lib/librte_ethdev.so.24.0 00:03:12.979 [274/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:12.979 [275/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:12.979 [276/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:12.979 [277/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:12.979 [278/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:12.979 [279/705] Linking static target lib/librte_eventdev.a 00:03:12.979 
[280/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:12.979 [281/705] Linking target lib/librte_metrics.so.24.0 00:03:12.979 [282/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:12.979 [283/705] Linking target lib/librte_bpf.so.24.0 00:03:12.979 [284/705] Linking target lib/librte_gro.so.24.0 00:03:12.979 [285/705] Linking static target lib/librte_gso.a 00:03:12.979 [286/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:12.979 [287/705] Linking target lib/librte_bitratestats.so.24.0 00:03:12.979 [288/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:13.245 [289/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.245 [290/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:13.245 [291/705] Linking target lib/librte_gso.so.24.0 00:03:13.245 [292/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:13.245 [293/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:13.245 [294/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:13.245 [295/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:13.245 [296/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:13.245 [297/705] Linking static target lib/librte_jobstats.a 00:03:13.506 [298/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:13.506 [299/705] Linking static target lib/librte_ip_frag.a 00:03:13.506 [300/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:13.507 [301/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.507 [302/705] Linking static target lib/librte_latencystats.a 00:03:13.507 [303/705] Linking target lib/librte_jobstats.so.24.0 00:03:13.507 [304/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:13.507 [305/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:13.765 [306/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.765 [307/705] Linking target lib/librte_ip_frag.so.24.0 00:03:13.765 [308/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:13.765 [309/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.765 [310/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:13.765 [311/705] Linking target lib/librte_latencystats.so.24.0 00:03:13.765 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:13.765 [313/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:14.023 [314/705] Linking static target lib/librte_lpm.a 00:03:14.024 [315/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:14.024 [316/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:14.024 [317/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:14.024 [318/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:14.024 [319/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:14.282 [320/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:14.282 [321/705] Generating 
lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.282 [322/705] Linking target lib/librte_lpm.so.24.0 00:03:14.282 [323/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:14.282 [324/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:14.282 [325/705] Linking static target lib/librte_pcapng.a 00:03:14.282 [326/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:14.282 [327/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:14.282 [328/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:14.282 [329/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:14.541 [330/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.541 [331/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.541 [332/705] Linking target lib/librte_eventdev.so.24.0 00:03:14.541 [333/705] Linking target lib/librte_pcapng.so.24.0 00:03:14.541 [334/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:14.541 [335/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:14.541 [336/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:14.541 [337/705] Linking target lib/librte_dispatcher.so.24.0 00:03:14.541 [338/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:14.541 [339/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:14.541 [340/705] Linking static target lib/librte_power.a 00:03:14.799 [341/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:14.799 [342/705] Linking static target lib/librte_member.a 00:03:14.799 [343/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:14.799 [344/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:14.799 [345/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:14.799 [346/705] Linking static target lib/librte_regexdev.a 00:03:14.799 [347/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:14.799 [348/705] Linking static target lib/librte_rawdev.a 00:03:14.799 [349/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:14.799 [350/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:15.058 [351/705] Linking static target lib/librte_mldev.a 00:03:15.058 [352/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.058 [353/705] Linking target lib/librte_member.so.24.0 00:03:15.058 [354/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:15.058 [355/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:15.058 [356/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:15.058 [357/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.058 [358/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.058 [359/705] Linking target lib/librte_rawdev.so.24.0 00:03:15.316 [360/705] Linking target lib/librte_power.so.24.0 00:03:15.316 [361/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:15.316 [362/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:15.316 [363/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:15.316 [364/705] Linking static target lib/librte_reorder.a 00:03:15.316 [365/705] Linking target lib/librte_regexdev.so.24.0 00:03:15.316 [366/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:15.316 [367/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:15.316 [368/705] Linking static target lib/librte_rib.a 00:03:15.316 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:15.316 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:15.574 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:15.574 [372/705] Linking static target lib/librte_stack.a 00:03:15.574 [373/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.574 [374/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:15.574 [375/705] Linking static target lib/librte_security.a 00:03:15.574 [376/705] Linking target lib/librte_reorder.so.24.0 00:03:15.574 [377/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.574 [378/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:15.574 [379/705] Linking target lib/librte_stack.so.24.0 00:03:15.831 [380/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:15.831 [381/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.831 [382/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:15.831 [383/705] Linking target lib/librte_rib.so.24.0 00:03:15.831 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:15.831 [385/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:15.831 [386/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.831 [387/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.831 [388/705] Linking target lib/librte_mldev.so.24.0 00:03:15.831 [389/705] Linking target lib/librte_security.so.24.0 00:03:16.089 [390/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:16.089 [391/705] Linking static target lib/librte_sched.a 00:03:16.089 [392/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:16.089 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:16.347 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:16.347 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.347 [396/705] Linking target lib/librte_sched.so.24.0 00:03:16.347 [397/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:16.347 [398/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:16.605 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:16.605 [400/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:16.605 [401/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:16.605 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:16.863 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:16.863 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:16.863 [405/705] Compiling C 
object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:16.863 [406/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:16.863 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:17.121 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:17.121 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:17.121 [410/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:17.121 [411/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:17.121 [412/705] Linking static target lib/librte_ipsec.a 00:03:17.121 [413/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:17.379 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.379 [415/705] Linking target lib/librte_ipsec.so.24.0 00:03:17.379 [416/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:17.379 [417/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:17.379 [418/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:17.637 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:17.637 [420/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:17.637 [421/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:17.637 [422/705] Linking static target lib/librte_fib.a 00:03:17.895 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:17.895 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:17.895 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:17.895 [426/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.895 [427/705] Linking target lib/librte_fib.so.24.0 00:03:17.895 [428/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:17.895 [429/705] Linking static target lib/librte_pdcp.a 00:03:17.895 [430/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:18.153 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.153 [432/705] Linking target lib/librte_pdcp.so.24.0 00:03:18.153 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:18.411 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:18.411 [435/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:18.411 [436/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:18.411 [437/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:18.411 [438/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:18.668 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:18.668 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:18.668 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:18.668 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:18.926 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:18.926 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:18.926 [445/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:18.926 [446/705] Linking static target lib/librte_port.a 00:03:18.926 [447/705] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:18.926 [448/705] Linking static target lib/librte_pdump.a 00:03:18.926 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:19.183 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:19.183 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:19.183 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.183 [453/705] Linking target lib/librte_pdump.so.24.0 00:03:19.183 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.183 [455/705] Linking target lib/librte_port.so.24.0 00:03:19.441 [456/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:19.441 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:19.441 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:19.441 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:19.441 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:19.441 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:19.699 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:19.699 [463/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:19.699 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:19.699 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:19.699 [466/705] Linking static target lib/librte_table.a 00:03:19.957 [467/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:19.957 [468/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:20.215 [469/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.215 [470/705] Linking target lib/librte_table.so.24.0 00:03:20.215 [471/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:20.215 [472/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:20.215 [473/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:20.215 [474/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:20.473 [475/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:20.473 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:20.731 [477/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:20.731 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:20.731 [479/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:20.731 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:20.731 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:20.989 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:20.989 [483/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:20.989 [484/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:20.989 [485/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:20.989 [486/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:20.989 [487/705] Linking static target lib/librte_graph.a 00:03:20.989 
[488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:21.247 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:21.506 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.506 [491/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:21.506 [492/705] Linking target lib/librte_graph.so.24.0 00:03:21.506 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:21.506 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:21.506 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:21.506 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:21.506 [497/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:21.764 [498/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:21.764 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:21.764 [500/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:21.765 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:21.765 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:22.023 [503/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:22.023 [504/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:22.023 [505/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:22.023 [506/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:22.023 [507/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:22.023 [508/705] Linking static target lib/librte_node.a 00:03:22.023 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:22.281 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:22.282 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.282 [512/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:22.282 [513/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:22.282 [514/705] Linking target lib/librte_node.so.24.0 00:03:22.282 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:22.282 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:22.540 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:22.540 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.540 [519/705] Linking static target drivers/librte_bus_pci.a 00:03:22.540 [520/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:22.540 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.540 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.540 [523/705] Linking static target drivers/librte_bus_vdev.a 00:03:22.540 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:22.540 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.540 [526/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:22.540 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:22.799 [528/705] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.799 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:22.799 [530/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:22.799 [531/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:22.799 [532/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:22.799 [533/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.799 [534/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:22.799 [535/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:22.799 [536/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.799 [537/705] Linking static target drivers/librte_mempool_ring.a 00:03:22.799 [538/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:22.799 [539/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:23.080 [540/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:23.080 [541/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:23.336 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:23.337 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:23.594 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:23.594 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:24.161 [546/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:24.161 [547/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:24.161 [548/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:24.161 [549/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:24.161 [550/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:24.161 [551/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:24.161 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:24.419 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:24.419 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:24.677 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:24.677 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:24.677 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:24.935 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:24.936 [559/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:24.936 [560/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:24.936 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:25.194 [562/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:25.194 [563/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:25.194 [564/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:25.194 [565/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:25.194 [566/705] Compiling C object 
app/dpdk-graph.p/graph_ip4_route.c.o 00:03:25.452 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:25.452 [568/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:25.452 [569/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:25.452 [570/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:25.452 [571/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:25.710 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:25.710 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:25.710 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:25.967 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:25.967 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:25.967 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:25.967 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:25.967 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:25.967 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:26.225 [581/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:26.225 [582/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:26.226 [583/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:26.226 [584/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:26.226 [585/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:26.226 [586/705] Linking static target drivers/librte_net_i40e.a 00:03:26.226 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:26.484 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:26.742 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.742 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:26.742 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:26.742 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:26.742 [593/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:26.742 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:26.742 [595/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:27.000 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:27.000 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:27.258 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:27.258 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:27.258 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:27.258 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:27.258 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:27.258 [603/705] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:27.258 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:27.515 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:27.515 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:27.515 [607/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:27.515 [608/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:27.515 [609/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:27.773 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:27.773 [611/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:27.773 [612/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:27.773 [613/705] Linking static target lib/librte_vhost.a 00:03:27.773 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:28.031 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:28.031 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:28.597 [617/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:28.597 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:28.597 [619/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.597 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:28.597 [621/705] Linking target lib/librte_vhost.so.24.0 00:03:28.597 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:28.597 [623/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:28.597 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:28.856 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:28.856 [626/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:28.856 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:28.856 [628/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:28.856 [629/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:29.114 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:29.114 [631/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:29.114 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:29.114 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:29.114 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:29.114 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:29.373 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:29.373 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:29.373 [638/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:29.373 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:29.631 [640/705] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:29.631 [641/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:29.631 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:29.631 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:29.631 [644/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:29.631 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:29.890 [646/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:29.890 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:29.890 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:29.890 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:30.148 [650/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:30.148 [651/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:30.148 [652/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:30.148 [653/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:30.148 [654/705] Linking static target lib/librte_pipeline.a 00:03:30.421 [655/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:30.421 [656/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:30.421 [657/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:30.421 [658/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:30.685 [659/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:30.685 [660/705] Linking target app/dpdk-dumpcap 00:03:30.685 [661/705] Linking target app/dpdk-graph 00:03:30.685 [662/705] Linking target app/dpdk-pdump 00:03:30.685 [663/705] Linking target app/dpdk-proc-info 00:03:30.943 [664/705] Linking target app/dpdk-test-acl 00:03:30.943 [665/705] Linking target app/dpdk-test-cmdline 00:03:30.943 [666/705] Linking target app/dpdk-test-bbdev 00:03:30.943 [667/705] Linking target app/dpdk-test-compress-perf 00:03:31.202 [668/705] Linking target app/dpdk-test-crypto-perf 00:03:31.202 [669/705] Linking target app/dpdk-test-dma-perf 00:03:31.202 [670/705] Linking target app/dpdk-test-fib 00:03:31.202 [671/705] Linking target app/dpdk-test-eventdev 00:03:31.461 [672/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:31.461 [673/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:31.461 [674/705] Linking target app/dpdk-test-flow-perf 00:03:31.461 [675/705] Linking target app/dpdk-test-gpudev 00:03:31.461 [676/705] Linking target app/dpdk-test-mldev 00:03:31.794 [677/705] Linking target app/dpdk-test-pipeline 00:03:31.794 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:31.794 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:31.794 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:31.794 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:31.794 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:31.794 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:32.053 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:32.053 [685/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:32.053 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:32.311 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:32.311 [688/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.311 [689/705] Linking target lib/librte_pipeline.so.24.0 00:03:32.311 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:32.311 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:32.311 [692/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:32.569 [693/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:32.569 [694/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:32.569 [695/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:32.569 [696/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:32.829 [697/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:32.829 [698/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:32.829 [699/705] Linking target app/dpdk-test-sad 00:03:33.087 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:33.087 [701/705] Linking target app/dpdk-test-regex 00:03:33.087 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:33.087 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:33.344 [704/705] Linking target app/dpdk-test-security-perf 00:03:33.602 [705/705] Linking target app/dpdk-testpmd 00:03:33.602 13:16:29 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:33.602 13:16:29 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:33.602 13:16:29 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:33.602 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:33.602 [0/1] Installing files. 
00:03:33.864 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.864 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.865 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.865 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:33.866 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:33.866 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:33.867 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.868 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.868 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.869 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.869 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.869 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.869 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:33.869 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:33.869 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:33.869 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:33.869 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:33.869 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:33.869 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.869 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.870 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.870 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.127 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.127 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.127 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.127 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:34.127 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.127 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:34.127 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.127 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:34.127 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.127 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:34.127 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.128 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.129 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.129 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:34.129 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:34.129 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:34.129 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:34.129 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:34.129 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:34.129 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:34.129 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:34.129 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:34.129 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:34.129 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:34.129 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:34.129 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:34.129 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:34.129 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:34.129 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:34.129 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:34.129 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:34.129 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:34.129 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:34.129 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:34.129 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:34.129 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:34.129 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:34.129 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:34.129 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:34.129 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:34.129 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:34.129 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:34.129 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:34.129 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:34.129 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:34.129 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:34.129 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:34.129 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:34.129 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:34.129 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:34.129 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:34.129 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:34.129 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:34.129 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:34.129 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:34.129 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:34.129 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:34.129 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:34.129 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:34.129 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:34.129 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:34.129 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:34.129 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:34.129 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:34.129 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:34.129 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:34.129 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:34.129 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:34.129 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:34.129 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:34.129 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:34.129 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:34.129 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:34.129 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:34.129 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:34.129 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:34.129 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:34.129 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:34.129 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:34.129 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:34.129 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:34.129 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:34.129 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:34.129 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:34.129 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:34.129 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:34.129 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:34.129 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:34.129 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:34.129 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:34.129 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:34.129 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:34.129 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:34.129 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:34.129 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:34.129 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:34.129 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:34.129 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:34.129 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:34.129 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:34.129 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:34.129 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:34.129 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:34.129 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:34.129 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:34.129 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:34.129 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:34.129 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:34.129 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:34.129 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:34.129 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:34.129 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:34.129 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:34.129 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:34.129 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:34.129 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:34.129 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:34.129 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:34.129 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:34.129 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:34.129 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:34.129 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:34.129 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:34.129 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:34.129 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:34.129 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:34.129 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:34.129 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:34.129 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:34.129 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:34.129 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:34.129 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:34.129 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:34.129 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:34.129 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:34.129 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:34.129 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:34.129 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:34.129 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:34.129 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:34.129 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:34.129 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:34.129 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:34.130 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
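The surrounding entries move the DPDK driver libraries (bus_pci, bus_vdev, mempool_ring, net_i40e) into the dpdk/pmds-24.0 plugin directory and install the usual soname symlink chain for each of them. A minimal sketch of what the resulting layout should look like on disk, assuming the same build prefix used throughout this log:

    ls -l /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/
    # expected chain per PMD, taking net_i40e as the example:
    #   librte_net_i40e.so      -> librte_net_i40e.so.24     (unversioned development link)
    #   librte_net_i40e.so.24   -> librte_net_i40e.so.24.0   (ABI-versioned link)
    #   librte_net_i40e.so.24.0                              (the shared object itself)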
00:03:34.130 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:34.130 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:34.130 13:16:30 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:34.130 13:16:30 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:34.130 00:03:34.130 real 0m36.869s 00:03:34.130 user 4m17.423s 00:03:34.130 sys 0m37.424s 00:03:34.130 13:16:30 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:34.130 13:16:30 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:34.130 ************************************ 00:03:34.130 END TEST build_native_dpdk 00:03:34.130 ************************************ 00:03:34.385 13:16:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:34.385 13:16:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:34.385 13:16:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:34.385 13:16:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:34.385 13:16:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:34.385 13:16:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:34.385 13:16:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:34.385 13:16:30 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:34.385 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:34.385 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.385 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:34.385 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:34.643 Using 'verbs' RDMA provider 00:03:45.550 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:55.547 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:55.808 Creating mk/config.mk...done. 00:03:55.808 Creating mk/cc.flags.mk...done. 00:03:55.808 Type 'make' to build. 00:03:55.808 13:16:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:55.808 13:16:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:55.808 13:16:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:55.808 13:16:51 -- common/autotest_common.sh@10 -- $ set +x 00:03:55.808 ************************************ 00:03:55.808 START TEST make 00:03:55.808 ************************************ 00:03:55.808 13:16:51 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:56.069 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:56.069 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:56.069 meson setup builddir \ 00:03:56.069 -Dwith-libaio=enabled \ 00:03:56.069 -Dwith-liburing=enabled \ 00:03:56.069 -Dwith-libvfn=disabled \ 00:03:56.069 -Dwith-spdk=disabled \ 00:03:56.069 -Dexamples=false \ 00:03:56.069 -Dtests=false \ 00:03:56.069 -Dtools=false && \ 00:03:56.069 meson compile -C builddir && \ 00:03:56.069 cd -) 00:03:56.069 make[1]: Nothing to be done for 'all'. 
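The SPDK configure invocation above consumes the DPDK tree that was just installed through the pkg-config files placed in build/lib/pkgconfig (the "Using ... for additional libs" line). A minimal sketch of how that lookup can be reproduced by hand, assuming the same prefix as in this log; the version string printed is whatever the libdpdk.pc written by this build contains:

    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk      # prints the DPDK version this tree was built from
    pkg-config --cflags --libs libdpdk   # roughly the include and link flags the SPDK build then consumes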
00:03:58.052 The Meson build system 00:03:58.052 Version: 1.5.0 00:03:58.052 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:58.052 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:58.052 Build type: native build 00:03:58.052 Project name: xnvme 00:03:58.052 Project version: 0.7.5 00:03:58.052 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:58.052 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:58.052 Host machine cpu family: x86_64 00:03:58.052 Host machine cpu: x86_64 00:03:58.052 Message: host_machine.system: linux 00:03:58.052 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:58.052 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:58.052 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:58.052 Run-time dependency threads found: YES 00:03:58.052 Has header "setupapi.h" : NO 00:03:58.052 Has header "linux/blkzoned.h" : YES 00:03:58.052 Has header "linux/blkzoned.h" : YES (cached) 00:03:58.052 Has header "libaio.h" : YES 00:03:58.052 Library aio found: YES 00:03:58.052 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:58.052 Run-time dependency liburing found: YES 2.2 00:03:58.052 Dependency libvfn skipped: feature with-libvfn disabled 00:03:58.052 Found CMake: /usr/bin/cmake (3.27.7) 00:03:58.052 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:58.052 Subproject spdk : skipped: feature with-spdk disabled 00:03:58.052 Run-time dependency appleframeworks found: NO (tried framework) 00:03:58.052 Run-time dependency appleframeworks found: NO (tried framework) 00:03:58.052 Library rt found: YES 00:03:58.052 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:58.052 Configuring xnvme_config.h using configuration 00:03:58.052 Configuring xnvme.spec using configuration 00:03:58.052 Run-time dependency bash-completion found: YES 2.11 00:03:58.052 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:58.052 Program cp found: YES (/usr/bin/cp) 00:03:58.052 Build targets in project: 3 00:03:58.052 00:03:58.052 xnvme 0.7.5 00:03:58.052 00:03:58.052 Subprojects 00:03:58.052 spdk : NO Feature 'with-spdk' disabled 00:03:58.052 00:03:58.052 User defined options 00:03:58.052 examples : false 00:03:58.052 tests : false 00:03:58.052 tools : false 00:03:58.052 with-libaio : enabled 00:03:58.052 with-liburing: enabled 00:03:58.052 with-libvfn : disabled 00:03:58.052 with-spdk : disabled 00:03:58.052 00:03:58.052 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:58.314 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:58.314 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:58.314 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:58.314 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:58.314 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:58.314 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:58.314 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:58.314 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:58.314 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:58.314 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:58.314 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:58.314 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:58.577 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:58.577 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:58.577 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:58.577 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:58.577 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:58.577 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:58.577 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:58.577 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:58.577 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:58.577 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:58.577 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:58.577 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:58.577 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:58.577 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:58.577 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:58.577 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:58.577 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:58.577 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:58.577 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:58.577 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:58.577 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:58.577 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:58.577 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:58.577 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:58.577 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:58.577 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:58.577 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:58.577 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:58.577 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:58.577 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:58.839 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:58.839 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:58.839 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:58.839 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:58.839 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:58.839 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:58.839 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:58.839 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:58.839 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:58.839 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:58.839 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:58.839 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:58.839 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:58.839 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:58.839 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:58.839 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:58.839 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:58.839 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:58.839 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:58.839 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:58.839 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:58.839 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:58.839 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:58.839 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:58.839 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:58.839 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:59.100 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:59.100 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:59.100 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:59.100 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:59.100 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:59.100 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:59.361 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:59.361 [75/76] Linking static target lib/libxnvme.a 00:03:59.361 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:59.361 INFO: autodetecting backend as ninja 00:03:59.361 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:59.622 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:38.364 CC lib/log/log_flags.o 00:04:38.364 CC lib/ut_mock/mock.o 00:04:38.364 CC lib/log/log.o 00:04:38.364 CC lib/log/log_deprecated.o 00:04:38.364 CC lib/ut/ut.o 00:04:38.364 LIB libspdk_ut_mock.a 00:04:38.364 LIB libspdk_log.a 00:04:38.364 LIB libspdk_ut.a 00:04:38.364 SO libspdk_ut_mock.so.6.0 00:04:38.364 SO libspdk_log.so.7.1 00:04:38.364 SO libspdk_ut.so.2.0 00:04:38.364 SYMLINK libspdk_ut_mock.so 00:04:38.364 SYMLINK libspdk_ut.so 00:04:38.364 SYMLINK libspdk_log.so 00:04:38.364 CC lib/dma/dma.o 00:04:38.364 CXX lib/trace_parser/trace.o 00:04:38.364 CC lib/ioat/ioat.o 00:04:38.364 CC lib/util/base64.o 00:04:38.364 CC lib/util/bit_array.o 00:04:38.364 CC lib/util/cpuset.o 00:04:38.364 CC lib/util/crc16.o 00:04:38.364 CC lib/util/crc32c.o 00:04:38.364 CC lib/util/crc32.o 00:04:38.364 CC lib/vfio_user/host/vfio_user_pci.o 00:04:38.364 CC lib/util/crc32_ieee.o 00:04:38.364 CC lib/vfio_user/host/vfio_user.o 00:04:38.364 CC lib/util/crc64.o 00:04:38.364 CC lib/util/dif.o 00:04:38.364 CC lib/util/fd.o 00:04:38.364 LIB libspdk_dma.a 00:04:38.364 CC lib/util/fd_group.o 00:04:38.364 SO libspdk_dma.so.5.0 00:04:38.364 CC lib/util/file.o 00:04:38.364 CC lib/util/hexlify.o 00:04:38.364 SYMLINK libspdk_dma.so 00:04:38.364 CC lib/util/iov.o 00:04:38.364 LIB libspdk_ioat.a 
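Because the configure line further up passes --with-shared, each SPDK component in this make run yields both a static archive (the LIB lines) and a versioned shared object with an unversioned symlink (the SO and SYMLINK lines). A minimal sketch for inspecting one component after the build finishes, assuming SPDK's conventional build/lib output directory under the repo root, which is not printed explicitly in this log:

    cd /home/vagrant/spdk_repo/spdk
    ls -l build/lib/libspdk_log.*
    # expected, based on the LIB/SO/SYMLINK lines above:
    #   libspdk_log.a          static archive
    #   libspdk_log.so.7.1     versioned shared object
    #   libspdk_log.so         symlink to libspdk_log.so.7.1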
00:04:38.364 CC lib/util/math.o 00:04:38.364 SO libspdk_ioat.so.7.0 00:04:38.364 CC lib/util/net.o 00:04:38.364 CC lib/util/pipe.o 00:04:38.364 LIB libspdk_vfio_user.a 00:04:38.364 SYMLINK libspdk_ioat.so 00:04:38.364 CC lib/util/strerror_tls.o 00:04:38.365 SO libspdk_vfio_user.so.5.0 00:04:38.365 CC lib/util/string.o 00:04:38.365 SYMLINK libspdk_vfio_user.so 00:04:38.365 CC lib/util/uuid.o 00:04:38.365 CC lib/util/xor.o 00:04:38.365 CC lib/util/zipf.o 00:04:38.365 CC lib/util/md5.o 00:04:38.365 LIB libspdk_util.a 00:04:38.365 SO libspdk_util.so.10.1 00:04:38.365 LIB libspdk_trace_parser.a 00:04:38.365 SYMLINK libspdk_util.so 00:04:38.365 SO libspdk_trace_parser.so.6.0 00:04:38.365 SYMLINK libspdk_trace_parser.so 00:04:38.365 CC lib/conf/conf.o 00:04:38.365 CC lib/json/json_parse.o 00:04:38.365 CC lib/json/json_write.o 00:04:38.365 CC lib/json/json_util.o 00:04:38.365 CC lib/vmd/vmd.o 00:04:38.365 CC lib/vmd/led.o 00:04:38.365 CC lib/idxd/idxd.o 00:04:38.365 CC lib/rdma_utils/rdma_utils.o 00:04:38.365 CC lib/idxd/idxd_user.o 00:04:38.365 CC lib/env_dpdk/env.o 00:04:38.365 CC lib/env_dpdk/memory.o 00:04:38.365 CC lib/env_dpdk/pci.o 00:04:38.365 CC lib/idxd/idxd_kernel.o 00:04:38.365 LIB libspdk_conf.a 00:04:38.365 CC lib/env_dpdk/init.o 00:04:38.365 SO libspdk_conf.so.6.0 00:04:38.365 LIB libspdk_rdma_utils.a 00:04:38.365 LIB libspdk_json.a 00:04:38.365 SYMLINK libspdk_conf.so 00:04:38.365 CC lib/env_dpdk/threads.o 00:04:38.365 SO libspdk_rdma_utils.so.1.0 00:04:38.365 SO libspdk_json.so.6.0 00:04:38.365 CC lib/env_dpdk/pci_ioat.o 00:04:38.365 SYMLINK libspdk_rdma_utils.so 00:04:38.365 CC lib/env_dpdk/pci_virtio.o 00:04:38.365 SYMLINK libspdk_json.so 00:04:38.365 CC lib/env_dpdk/pci_vmd.o 00:04:38.365 CC lib/env_dpdk/pci_idxd.o 00:04:38.365 CC lib/env_dpdk/pci_event.o 00:04:38.365 CC lib/env_dpdk/sigbus_handler.o 00:04:38.365 CC lib/rdma_provider/common.o 00:04:38.365 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:38.365 CC lib/env_dpdk/pci_dpdk.o 00:04:38.365 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:38.365 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:38.365 LIB libspdk_idxd.a 00:04:38.365 LIB libspdk_vmd.a 00:04:38.365 SO libspdk_idxd.so.12.1 00:04:38.365 SO libspdk_vmd.so.6.0 00:04:38.365 CC lib/jsonrpc/jsonrpc_server.o 00:04:38.365 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:38.365 CC lib/jsonrpc/jsonrpc_client.o 00:04:38.365 LIB libspdk_rdma_provider.a 00:04:38.365 SO libspdk_rdma_provider.so.7.0 00:04:38.365 SYMLINK libspdk_vmd.so 00:04:38.365 SYMLINK libspdk_idxd.so 00:04:38.365 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:38.365 SYMLINK libspdk_rdma_provider.so 00:04:38.365 LIB libspdk_jsonrpc.a 00:04:38.365 SO libspdk_jsonrpc.so.6.0 00:04:38.365 SYMLINK libspdk_jsonrpc.so 00:04:38.365 CC lib/rpc/rpc.o 00:04:38.365 LIB libspdk_env_dpdk.a 00:04:38.365 SO libspdk_env_dpdk.so.15.1 00:04:38.365 LIB libspdk_rpc.a 00:04:38.365 SO libspdk_rpc.so.6.0 00:04:38.365 SYMLINK libspdk_rpc.so 00:04:38.365 SYMLINK libspdk_env_dpdk.so 00:04:38.365 CC lib/keyring/keyring.o 00:04:38.365 CC lib/keyring/keyring_rpc.o 00:04:38.365 CC lib/notify/notify.o 00:04:38.365 CC lib/trace/trace_flags.o 00:04:38.365 CC lib/trace/trace.o 00:04:38.365 CC lib/trace/trace_rpc.o 00:04:38.365 CC lib/notify/notify_rpc.o 00:04:38.365 LIB libspdk_notify.a 00:04:38.365 SO libspdk_notify.so.6.0 00:04:38.365 LIB libspdk_trace.a 00:04:38.365 LIB libspdk_keyring.a 00:04:38.365 SYMLINK libspdk_notify.so 00:04:38.365 SO libspdk_keyring.so.2.0 00:04:38.365 SO libspdk_trace.so.11.0 00:04:38.365 SYMLINK libspdk_keyring.so 00:04:38.365 SYMLINK 
libspdk_trace.so 00:04:38.365 CC lib/thread/iobuf.o 00:04:38.365 CC lib/thread/thread.o 00:04:38.365 CC lib/sock/sock.o 00:04:38.365 CC lib/sock/sock_rpc.o 00:04:38.365 LIB libspdk_sock.a 00:04:38.365 SO libspdk_sock.so.10.0 00:04:38.365 SYMLINK libspdk_sock.so 00:04:38.365 CC lib/nvme/nvme_fabric.o 00:04:38.365 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:38.365 CC lib/nvme/nvme_ctrlr.o 00:04:38.365 CC lib/nvme/nvme_ns_cmd.o 00:04:38.365 CC lib/nvme/nvme_ns.o 00:04:38.365 CC lib/nvme/nvme_pcie_common.o 00:04:38.365 CC lib/nvme/nvme_pcie.o 00:04:38.365 CC lib/nvme/nvme_qpair.o 00:04:38.365 CC lib/nvme/nvme.o 00:04:38.365 LIB libspdk_thread.a 00:04:38.365 SO libspdk_thread.so.11.0 00:04:38.365 CC lib/nvme/nvme_quirks.o 00:04:38.365 SYMLINK libspdk_thread.so 00:04:38.365 CC lib/nvme/nvme_transport.o 00:04:38.365 CC lib/nvme/nvme_discovery.o 00:04:38.365 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:38.365 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:38.365 CC lib/accel/accel.o 00:04:38.365 CC lib/accel/accel_rpc.o 00:04:38.365 CC lib/accel/accel_sw.o 00:04:38.365 CC lib/nvme/nvme_tcp.o 00:04:38.365 CC lib/nvme/nvme_opal.o 00:04:38.365 CC lib/nvme/nvme_io_msg.o 00:04:38.365 CC lib/nvme/nvme_poll_group.o 00:04:38.365 CC lib/nvme/nvme_zns.o 00:04:38.365 CC lib/blob/blobstore.o 00:04:38.365 CC lib/blob/request.o 00:04:38.365 CC lib/blob/zeroes.o 00:04:38.623 CC lib/blob/blob_bs_dev.o 00:04:38.881 CC lib/nvme/nvme_stubs.o 00:04:38.881 CC lib/nvme/nvme_auth.o 00:04:38.881 CC lib/nvme/nvme_cuse.o 00:04:38.881 LIB libspdk_accel.a 00:04:38.881 CC lib/nvme/nvme_rdma.o 00:04:38.881 SO libspdk_accel.so.16.0 00:04:39.138 SYMLINK libspdk_accel.so 00:04:39.138 CC lib/init/json_config.o 00:04:39.138 CC lib/virtio/virtio.o 00:04:39.138 CC lib/fsdev/fsdev.o 00:04:39.396 CC lib/init/subsystem.o 00:04:39.396 CC lib/init/subsystem_rpc.o 00:04:39.396 CC lib/virtio/virtio_vhost_user.o 00:04:39.396 CC lib/fsdev/fsdev_io.o 00:04:39.396 CC lib/init/rpc.o 00:04:39.396 CC lib/fsdev/fsdev_rpc.o 00:04:39.654 CC lib/virtio/virtio_vfio_user.o 00:04:39.654 LIB libspdk_init.a 00:04:39.654 CC lib/bdev/bdev.o 00:04:39.654 CC lib/bdev/bdev_rpc.o 00:04:39.654 SO libspdk_init.so.6.0 00:04:39.654 SYMLINK libspdk_init.so 00:04:39.654 CC lib/bdev/bdev_zone.o 00:04:39.654 CC lib/bdev/part.o 00:04:39.654 CC lib/bdev/scsi_nvme.o 00:04:39.911 CC lib/virtio/virtio_pci.o 00:04:39.911 LIB libspdk_fsdev.a 00:04:39.911 SO libspdk_fsdev.so.2.0 00:04:39.911 SYMLINK libspdk_fsdev.so 00:04:39.911 CC lib/event/app.o 00:04:39.911 CC lib/event/reactor.o 00:04:39.911 CC lib/event/log_rpc.o 00:04:39.911 CC lib/event/app_rpc.o 00:04:39.911 CC lib/event/scheduler_static.o 00:04:39.911 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:40.168 LIB libspdk_virtio.a 00:04:40.168 SO libspdk_virtio.so.7.0 00:04:40.168 SYMLINK libspdk_virtio.so 00:04:40.425 LIB libspdk_nvme.a 00:04:40.425 LIB libspdk_event.a 00:04:40.425 SO libspdk_event.so.14.0 00:04:40.425 SO libspdk_nvme.so.15.0 00:04:40.425 SYMLINK libspdk_event.so 00:04:40.683 LIB libspdk_fuse_dispatcher.a 00:04:40.683 SYMLINK libspdk_nvme.so 00:04:40.683 SO libspdk_fuse_dispatcher.so.1.0 00:04:40.683 SYMLINK libspdk_fuse_dispatcher.so 00:04:41.617 LIB libspdk_blob.a 00:04:41.877 SO libspdk_blob.so.11.0 00:04:41.877 SYMLINK libspdk_blob.so 00:04:42.136 CC lib/blobfs/blobfs.o 00:04:42.136 CC lib/blobfs/tree.o 00:04:42.136 CC lib/lvol/lvol.o 00:04:42.394 LIB libspdk_bdev.a 00:04:42.394 SO libspdk_bdev.so.17.0 00:04:42.394 SYMLINK libspdk_bdev.so 00:04:42.652 CC lib/ftl/ftl_core.o 00:04:42.652 CC lib/ftl/ftl_init.o 00:04:42.652 CC 
lib/ftl/ftl_debug.o 00:04:42.652 CC lib/nbd/nbd.o 00:04:42.652 CC lib/ftl/ftl_layout.o 00:04:42.652 CC lib/ublk/ublk.o 00:04:42.652 CC lib/scsi/dev.o 00:04:42.652 CC lib/nvmf/ctrlr.o 00:04:42.910 CC lib/ftl/ftl_io.o 00:04:42.910 CC lib/nvmf/ctrlr_discovery.o 00:04:42.910 CC lib/scsi/lun.o 00:04:42.910 LIB libspdk_blobfs.a 00:04:42.910 SO libspdk_blobfs.so.10.0 00:04:42.910 CC lib/scsi/port.o 00:04:42.910 SYMLINK libspdk_blobfs.so 00:04:42.910 CC lib/scsi/scsi.o 00:04:42.910 CC lib/scsi/scsi_bdev.o 00:04:42.910 LIB libspdk_lvol.a 00:04:43.167 SO libspdk_lvol.so.10.0 00:04:43.167 CC lib/nbd/nbd_rpc.o 00:04:43.167 CC lib/ftl/ftl_sb.o 00:04:43.167 CC lib/ftl/ftl_l2p.o 00:04:43.167 SYMLINK libspdk_lvol.so 00:04:43.167 CC lib/ftl/ftl_l2p_flat.o 00:04:43.167 CC lib/ftl/ftl_nv_cache.o 00:04:43.167 CC lib/ftl/ftl_band.o 00:04:43.167 CC lib/ublk/ublk_rpc.o 00:04:43.167 CC lib/ftl/ftl_band_ops.o 00:04:43.167 LIB libspdk_nbd.a 00:04:43.167 CC lib/nvmf/ctrlr_bdev.o 00:04:43.167 SO libspdk_nbd.so.7.0 00:04:43.425 CC lib/nvmf/subsystem.o 00:04:43.425 SYMLINK libspdk_nbd.so 00:04:43.425 CC lib/scsi/scsi_pr.o 00:04:43.425 LIB libspdk_ublk.a 00:04:43.425 SO libspdk_ublk.so.3.0 00:04:43.425 CC lib/scsi/scsi_rpc.o 00:04:43.425 SYMLINK libspdk_ublk.so 00:04:43.425 CC lib/scsi/task.o 00:04:43.425 CC lib/ftl/ftl_writer.o 00:04:43.425 CC lib/nvmf/nvmf.o 00:04:43.425 CC lib/nvmf/nvmf_rpc.o 00:04:43.425 CC lib/ftl/ftl_rq.o 00:04:43.683 CC lib/ftl/ftl_reloc.o 00:04:43.683 LIB libspdk_scsi.a 00:04:43.683 SO libspdk_scsi.so.9.0 00:04:43.683 CC lib/nvmf/transport.o 00:04:43.683 CC lib/nvmf/tcp.o 00:04:43.683 SYMLINK libspdk_scsi.so 00:04:43.683 CC lib/nvmf/stubs.o 00:04:43.941 CC lib/nvmf/mdns_server.o 00:04:43.941 CC lib/ftl/ftl_l2p_cache.o 00:04:44.199 CC lib/nvmf/rdma.o 00:04:44.199 CC lib/nvmf/auth.o 00:04:44.199 CC lib/ftl/ftl_p2l.o 00:04:44.457 CC lib/iscsi/conn.o 00:04:44.457 CC lib/ftl/ftl_p2l_log.o 00:04:44.457 CC lib/ftl/mngt/ftl_mngt.o 00:04:44.457 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:44.457 CC lib/iscsi/init_grp.o 00:04:44.716 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:44.716 CC lib/vhost/vhost.o 00:04:44.716 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:44.716 CC lib/iscsi/iscsi.o 00:04:44.716 CC lib/iscsi/param.o 00:04:44.716 CC lib/iscsi/portal_grp.o 00:04:44.716 CC lib/vhost/vhost_rpc.o 00:04:44.716 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:44.974 CC lib/iscsi/tgt_node.o 00:04:44.974 CC lib/iscsi/iscsi_subsystem.o 00:04:44.974 CC lib/vhost/vhost_scsi.o 00:04:44.974 CC lib/iscsi/iscsi_rpc.o 00:04:45.231 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:45.231 CC lib/vhost/vhost_blk.o 00:04:45.231 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.489 CC lib/iscsi/task.o 00:04:45.489 CC lib/vhost/rte_vhost_user.o 00:04:45.489 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.489 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.489 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.489 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.489 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:45.489 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.816 CC lib/ftl/utils/ftl_conf.o 00:04:45.816 CC lib/ftl/utils/ftl_md.o 00:04:45.816 CC lib/ftl/utils/ftl_mempool.o 00:04:45.816 CC lib/ftl/utils/ftl_bitmap.o 00:04:45.816 CC lib/ftl/utils/ftl_property.o 00:04:45.816 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:45.816 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:45.816 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:46.074 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:46.074 LIB libspdk_nvmf.a 00:04:46.074 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:46.074 SO libspdk_nvmf.so.20.0 00:04:46.074 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:46.074 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:46.074 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.074 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.074 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.074 LIB libspdk_iscsi.a 00:04:46.074 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:46.335 SYMLINK libspdk_nvmf.so 00:04:46.335 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:46.335 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:46.335 CC lib/ftl/base/ftl_base_dev.o 00:04:46.335 CC lib/ftl/base/ftl_base_bdev.o 00:04:46.335 CC lib/ftl/ftl_trace.o 00:04:46.335 SO libspdk_iscsi.so.8.0 00:04:46.335 LIB libspdk_vhost.a 00:04:46.335 SYMLINK libspdk_iscsi.so 00:04:46.335 SO libspdk_vhost.so.8.0 00:04:46.594 LIB libspdk_ftl.a 00:04:46.594 SYMLINK libspdk_vhost.so 00:04:46.594 SO libspdk_ftl.so.9.0 00:04:46.853 SYMLINK libspdk_ftl.so 00:04:47.111 CC module/env_dpdk/env_dpdk_rpc.o 00:04:47.111 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:47.111 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:47.111 CC module/accel/error/accel_error.o 00:04:47.111 CC module/blob/bdev/blob_bdev.o 00:04:47.111 CC module/sock/posix/posix.o 00:04:47.111 CC module/scheduler/gscheduler/gscheduler.o 00:04:47.111 CC module/keyring/file/keyring.o 00:04:47.111 CC module/fsdev/aio/fsdev_aio.o 00:04:47.111 CC module/keyring/linux/keyring.o 00:04:47.368 LIB libspdk_env_dpdk_rpc.a 00:04:47.368 SO libspdk_env_dpdk_rpc.so.6.0 00:04:47.368 CC module/keyring/linux/keyring_rpc.o 00:04:47.368 SYMLINK libspdk_env_dpdk_rpc.so 00:04:47.368 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:47.368 CC module/keyring/file/keyring_rpc.o 00:04:47.368 LIB libspdk_scheduler_gscheduler.a 00:04:47.368 LIB libspdk_scheduler_dpdk_governor.a 00:04:47.368 SO libspdk_scheduler_gscheduler.so.4.0 00:04:47.368 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:47.368 LIB libspdk_scheduler_dynamic.a 00:04:47.368 CC module/accel/error/accel_error_rpc.o 00:04:47.368 SYMLINK libspdk_scheduler_gscheduler.so 00:04:47.369 SO libspdk_scheduler_dynamic.so.4.0 00:04:47.369 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:47.369 CC module/fsdev/aio/linux_aio_mgr.o 00:04:47.369 LIB libspdk_keyring_linux.a 00:04:47.369 LIB libspdk_keyring_file.a 00:04:47.369 SYMLINK libspdk_scheduler_dynamic.so 00:04:47.628 LIB libspdk_blob_bdev.a 00:04:47.628 SO libspdk_keyring_linux.so.1.0 00:04:47.628 SO libspdk_keyring_file.so.2.0 00:04:47.628 SO libspdk_blob_bdev.so.11.0 00:04:47.628 LIB libspdk_accel_error.a 00:04:47.628 SYMLINK libspdk_keyring_linux.so 00:04:47.628 SO libspdk_accel_error.so.2.0 00:04:47.628 SYMLINK libspdk_keyring_file.so 00:04:47.628 SYMLINK libspdk_blob_bdev.so 00:04:47.628 CC module/accel/ioat/accel_ioat.o 00:04:47.628 CC module/accel/ioat/accel_ioat_rpc.o 00:04:47.628 SYMLINK libspdk_accel_error.so 00:04:47.628 CC module/accel/dsa/accel_dsa.o 00:04:47.628 CC module/accel/dsa/accel_dsa_rpc.o 00:04:47.628 CC module/accel/iaa/accel_iaa.o 00:04:47.628 CC module/accel/iaa/accel_iaa_rpc.o 00:04:47.887 LIB libspdk_accel_ioat.a 00:04:47.887 SO libspdk_accel_ioat.so.6.0 00:04:47.887 CC module/bdev/delay/vbdev_delay.o 00:04:47.887 CC module/blobfs/bdev/blobfs_bdev.o 00:04:47.887 LIB libspdk_accel_iaa.a 00:04:47.887 CC module/bdev/error/vbdev_error.o 00:04:47.887 SYMLINK libspdk_accel_ioat.so 00:04:47.887 CC module/bdev/error/vbdev_error_rpc.o 00:04:47.887 SO libspdk_accel_iaa.so.3.0 00:04:47.887 LIB libspdk_fsdev_aio.a 00:04:47.887 LIB libspdk_accel_dsa.a 00:04:47.887 CC module/bdev/lvol/vbdev_lvol.o 00:04:47.887 CC module/bdev/gpt/gpt.o 00:04:47.887 
SYMLINK libspdk_accel_iaa.so 00:04:47.887 CC module/bdev/gpt/vbdev_gpt.o 00:04:47.887 SO libspdk_fsdev_aio.so.1.0 00:04:47.887 SO libspdk_accel_dsa.so.5.0 00:04:47.887 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:47.887 LIB libspdk_sock_posix.a 00:04:48.145 SO libspdk_sock_posix.so.6.0 00:04:48.145 SYMLINK libspdk_fsdev_aio.so 00:04:48.145 SYMLINK libspdk_accel_dsa.so 00:04:48.145 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:48.145 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:48.145 SYMLINK libspdk_sock_posix.so 00:04:48.145 LIB libspdk_blobfs_bdev.a 00:04:48.145 LIB libspdk_bdev_error.a 00:04:48.145 SO libspdk_blobfs_bdev.so.6.0 00:04:48.145 CC module/bdev/malloc/bdev_malloc.o 00:04:48.145 SO libspdk_bdev_error.so.6.0 00:04:48.145 LIB libspdk_bdev_delay.a 00:04:48.145 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:48.145 LIB libspdk_bdev_gpt.a 00:04:48.145 SYMLINK libspdk_blobfs_bdev.so 00:04:48.145 SYMLINK libspdk_bdev_error.so 00:04:48.145 CC module/bdev/null/bdev_null.o 00:04:48.145 SO libspdk_bdev_delay.so.6.0 00:04:48.145 SO libspdk_bdev_gpt.so.6.0 00:04:48.145 CC module/bdev/nvme/bdev_nvme.o 00:04:48.405 SYMLINK libspdk_bdev_delay.so 00:04:48.405 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:48.405 SYMLINK libspdk_bdev_gpt.so 00:04:48.405 CC module/bdev/passthru/vbdev_passthru.o 00:04:48.405 CC module/bdev/raid/bdev_raid.o 00:04:48.405 LIB libspdk_bdev_lvol.a 00:04:48.405 CC module/bdev/split/vbdev_split.o 00:04:48.405 SO libspdk_bdev_lvol.so.6.0 00:04:48.405 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:48.663 LIB libspdk_bdev_malloc.a 00:04:48.663 SYMLINK libspdk_bdev_lvol.so 00:04:48.663 CC module/bdev/raid/bdev_raid_rpc.o 00:04:48.663 SO libspdk_bdev_malloc.so.6.0 00:04:48.663 CC module/bdev/null/bdev_null_rpc.o 00:04:48.663 CC module/bdev/xnvme/bdev_xnvme.o 00:04:48.663 SYMLINK libspdk_bdev_malloc.so 00:04:48.663 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:48.663 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.663 CC module/bdev/split/vbdev_split_rpc.o 00:04:48.663 LIB libspdk_bdev_null.a 00:04:48.663 SO libspdk_bdev_null.so.6.0 00:04:48.663 CC module/bdev/raid/bdev_raid_sb.o 00:04:48.921 CC module/bdev/raid/raid0.o 00:04:48.921 LIB libspdk_bdev_passthru.a 00:04:48.921 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:48.921 SYMLINK libspdk_bdev_null.so 00:04:48.921 CC module/bdev/raid/raid1.o 00:04:48.921 SO libspdk_bdev_passthru.so.6.0 00:04:48.921 LIB libspdk_bdev_split.a 00:04:48.921 LIB libspdk_bdev_zone_block.a 00:04:48.921 SO libspdk_bdev_split.so.6.0 00:04:48.921 SO libspdk_bdev_zone_block.so.6.0 00:04:48.921 SYMLINK libspdk_bdev_passthru.so 00:04:48.921 LIB libspdk_bdev_xnvme.a 00:04:48.921 SYMLINK libspdk_bdev_split.so 00:04:48.921 CC module/bdev/raid/concat.o 00:04:48.921 SO libspdk_bdev_xnvme.so.3.0 00:04:48.921 SYMLINK libspdk_bdev_zone_block.so 00:04:49.179 CC module/bdev/nvme/nvme_rpc.o 00:04:49.179 SYMLINK libspdk_bdev_xnvme.so 00:04:49.179 CC module/bdev/nvme/bdev_mdns_client.o 00:04:49.179 CC module/bdev/aio/bdev_aio.o 00:04:49.179 CC module/bdev/nvme/vbdev_opal.o 00:04:49.179 CC module/bdev/ftl/bdev_ftl.o 00:04:49.179 CC module/bdev/iscsi/bdev_iscsi.o 00:04:49.179 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:49.179 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:49.179 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:49.179 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:49.437 CC module/bdev/aio/bdev_aio_rpc.o 00:04:49.437 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:49.437 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:49.437 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:49.437 LIB libspdk_bdev_aio.a 00:04:49.437 SO libspdk_bdev_aio.so.6.0 00:04:49.437 LIB libspdk_bdev_ftl.a 00:04:49.437 LIB libspdk_bdev_raid.a 00:04:49.437 LIB libspdk_bdev_iscsi.a 00:04:49.437 SYMLINK libspdk_bdev_aio.so 00:04:49.437 SO libspdk_bdev_iscsi.so.6.0 00:04:49.437 SO libspdk_bdev_ftl.so.6.0 00:04:49.437 SO libspdk_bdev_raid.so.6.0 00:04:49.696 SYMLINK libspdk_bdev_iscsi.so 00:04:49.696 SYMLINK libspdk_bdev_ftl.so 00:04:49.696 SYMLINK libspdk_bdev_raid.so 00:04:49.696 LIB libspdk_bdev_virtio.a 00:04:49.696 SO libspdk_bdev_virtio.so.6.0 00:04:49.957 SYMLINK libspdk_bdev_virtio.so 00:04:51.344 LIB libspdk_bdev_nvme.a 00:04:51.344 SO libspdk_bdev_nvme.so.7.1 00:04:51.344 SYMLINK libspdk_bdev_nvme.so 00:04:51.603 CC module/event/subsystems/sock/sock.o 00:04:51.603 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:51.603 CC module/event/subsystems/scheduler/scheduler.o 00:04:51.603 CC module/event/subsystems/fsdev/fsdev.o 00:04:51.603 CC module/event/subsystems/vmd/vmd.o 00:04:51.603 CC module/event/subsystems/iobuf/iobuf.o 00:04:51.603 CC module/event/subsystems/keyring/keyring.o 00:04:51.603 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:51.603 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:51.862 LIB libspdk_event_keyring.a 00:04:51.862 LIB libspdk_event_vhost_blk.a 00:04:51.862 LIB libspdk_event_sock.a 00:04:51.862 LIB libspdk_event_scheduler.a 00:04:51.862 SO libspdk_event_keyring.so.1.0 00:04:51.862 SO libspdk_event_vhost_blk.so.3.0 00:04:51.862 SO libspdk_event_sock.so.5.0 00:04:51.862 LIB libspdk_event_fsdev.a 00:04:51.862 SO libspdk_event_scheduler.so.4.0 00:04:51.862 LIB libspdk_event_iobuf.a 00:04:51.862 LIB libspdk_event_vmd.a 00:04:51.862 SO libspdk_event_fsdev.so.1.0 00:04:51.862 SYMLINK libspdk_event_keyring.so 00:04:51.862 SO libspdk_event_vmd.so.6.0 00:04:51.862 SO libspdk_event_iobuf.so.3.0 00:04:51.862 SYMLINK libspdk_event_sock.so 00:04:51.862 SYMLINK libspdk_event_vhost_blk.so 00:04:51.862 SYMLINK libspdk_event_scheduler.so 00:04:51.862 SYMLINK libspdk_event_vmd.so 00:04:51.862 SYMLINK libspdk_event_fsdev.so 00:04:51.862 SYMLINK libspdk_event_iobuf.so 00:04:52.124 CC module/event/subsystems/accel/accel.o 00:04:52.385 LIB libspdk_event_accel.a 00:04:52.385 SO libspdk_event_accel.so.6.0 00:04:52.385 SYMLINK libspdk_event_accel.so 00:04:52.718 CC module/event/subsystems/bdev/bdev.o 00:04:52.718 LIB libspdk_event_bdev.a 00:04:52.718 SO libspdk_event_bdev.so.6.0 00:04:52.718 SYMLINK libspdk_event_bdev.so 00:04:52.977 CC module/event/subsystems/scsi/scsi.o 00:04:52.977 CC module/event/subsystems/ublk/ublk.o 00:04:52.977 CC module/event/subsystems/nbd/nbd.o 00:04:52.977 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:52.977 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:52.977 LIB libspdk_event_nbd.a 00:04:52.977 LIB libspdk_event_ublk.a 00:04:53.237 LIB libspdk_event_scsi.a 00:04:53.237 SO libspdk_event_nbd.so.6.0 00:04:53.237 SO libspdk_event_ublk.so.3.0 00:04:53.237 SO libspdk_event_scsi.so.6.0 00:04:53.237 SYMLINK libspdk_event_nbd.so 00:04:53.237 SYMLINK libspdk_event_ublk.so 00:04:53.237 SYMLINK libspdk_event_scsi.so 00:04:53.237 LIB libspdk_event_nvmf.a 00:04:53.237 SO libspdk_event_nvmf.so.6.0 00:04:53.237 SYMLINK libspdk_event_nvmf.so 00:04:53.497 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:53.497 CC module/event/subsystems/iscsi/iscsi.o 00:04:53.497 LIB libspdk_event_vhost_scsi.a 00:04:53.497 SO libspdk_event_vhost_scsi.so.3.0 00:04:53.497 SYMLINK libspdk_event_vhost_scsi.so 00:04:53.497 LIB 
libspdk_event_iscsi.a 00:04:53.497 SO libspdk_event_iscsi.so.6.0 00:04:53.755 SYMLINK libspdk_event_iscsi.so 00:04:53.755 SO libspdk.so.6.0 00:04:53.755 SYMLINK libspdk.so 00:04:54.013 TEST_HEADER include/spdk/accel.h 00:04:54.013 TEST_HEADER include/spdk/accel_module.h 00:04:54.013 TEST_HEADER include/spdk/assert.h 00:04:54.014 CC test/rpc_client/rpc_client_test.o 00:04:54.014 CXX app/trace/trace.o 00:04:54.014 TEST_HEADER include/spdk/barrier.h 00:04:54.014 TEST_HEADER include/spdk/base64.h 00:04:54.014 TEST_HEADER include/spdk/bdev.h 00:04:54.014 TEST_HEADER include/spdk/bdev_module.h 00:04:54.014 TEST_HEADER include/spdk/bdev_zone.h 00:04:54.014 CC app/trace_record/trace_record.o 00:04:54.014 TEST_HEADER include/spdk/bit_array.h 00:04:54.014 TEST_HEADER include/spdk/bit_pool.h 00:04:54.014 TEST_HEADER include/spdk/blob_bdev.h 00:04:54.014 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:54.014 TEST_HEADER include/spdk/blobfs.h 00:04:54.014 TEST_HEADER include/spdk/blob.h 00:04:54.014 TEST_HEADER include/spdk/conf.h 00:04:54.014 TEST_HEADER include/spdk/config.h 00:04:54.014 TEST_HEADER include/spdk/cpuset.h 00:04:54.014 TEST_HEADER include/spdk/crc16.h 00:04:54.014 TEST_HEADER include/spdk/crc32.h 00:04:54.014 TEST_HEADER include/spdk/crc64.h 00:04:54.014 TEST_HEADER include/spdk/dif.h 00:04:54.014 TEST_HEADER include/spdk/dma.h 00:04:54.014 TEST_HEADER include/spdk/endian.h 00:04:54.014 CC app/nvmf_tgt/nvmf_main.o 00:04:54.014 TEST_HEADER include/spdk/env_dpdk.h 00:04:54.014 TEST_HEADER include/spdk/env.h 00:04:54.014 TEST_HEADER include/spdk/event.h 00:04:54.014 TEST_HEADER include/spdk/fd_group.h 00:04:54.014 TEST_HEADER include/spdk/fd.h 00:04:54.014 TEST_HEADER include/spdk/file.h 00:04:54.014 TEST_HEADER include/spdk/fsdev.h 00:04:54.014 TEST_HEADER include/spdk/fsdev_module.h 00:04:54.014 TEST_HEADER include/spdk/ftl.h 00:04:54.014 CC test/thread/poller_perf/poller_perf.o 00:04:54.014 CC examples/util/zipf/zipf.o 00:04:54.014 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:54.014 TEST_HEADER include/spdk/gpt_spec.h 00:04:54.014 TEST_HEADER include/spdk/hexlify.h 00:04:54.014 TEST_HEADER include/spdk/histogram_data.h 00:04:54.014 TEST_HEADER include/spdk/idxd.h 00:04:54.014 TEST_HEADER include/spdk/idxd_spec.h 00:04:54.014 TEST_HEADER include/spdk/init.h 00:04:54.014 TEST_HEADER include/spdk/ioat.h 00:04:54.014 TEST_HEADER include/spdk/ioat_spec.h 00:04:54.014 TEST_HEADER include/spdk/iscsi_spec.h 00:04:54.014 TEST_HEADER include/spdk/json.h 00:04:54.014 TEST_HEADER include/spdk/jsonrpc.h 00:04:54.014 TEST_HEADER include/spdk/keyring.h 00:04:54.014 TEST_HEADER include/spdk/keyring_module.h 00:04:54.014 TEST_HEADER include/spdk/likely.h 00:04:54.014 TEST_HEADER include/spdk/log.h 00:04:54.014 TEST_HEADER include/spdk/lvol.h 00:04:54.014 TEST_HEADER include/spdk/md5.h 00:04:54.014 CC test/app/bdev_svc/bdev_svc.o 00:04:54.014 TEST_HEADER include/spdk/memory.h 00:04:54.014 TEST_HEADER include/spdk/mmio.h 00:04:54.014 TEST_HEADER include/spdk/nbd.h 00:04:54.014 TEST_HEADER include/spdk/net.h 00:04:54.014 TEST_HEADER include/spdk/notify.h 00:04:54.014 CC test/dma/test_dma/test_dma.o 00:04:54.014 TEST_HEADER include/spdk/nvme.h 00:04:54.014 TEST_HEADER include/spdk/nvme_intel.h 00:04:54.014 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:54.014 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:54.014 TEST_HEADER include/spdk/nvme_spec.h 00:04:54.014 TEST_HEADER include/spdk/nvme_zns.h 00:04:54.014 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:54.014 CC test/env/mem_callbacks/mem_callbacks.o 
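[Editor's note] The CC/LIB/SYMLINK lines above are the tail of the SPDK library build, and the TEST_HEADER/CXX lines that follow belong to the autotest application build. A minimal sketch of the build phase they correspond to, assuming the standard SPDK configure-and-make flow; the configure flags are illustrative assumptions, not taken from this run:

    # illustrative reproduction of the build phase traced above (flags assumed, not from this log)
    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-shared      # shared objects account for the LIB/SYMLINK libspdk_*.so lines
    make -j"$(nproc)"              # emits the CC/CXX/LINK lines seen throughout this section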
00:04:54.014 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:54.014 TEST_HEADER include/spdk/nvmf.h 00:04:54.014 TEST_HEADER include/spdk/nvmf_spec.h 00:04:54.014 TEST_HEADER include/spdk/nvmf_transport.h 00:04:54.014 TEST_HEADER include/spdk/opal.h 00:04:54.014 TEST_HEADER include/spdk/opal_spec.h 00:04:54.014 TEST_HEADER include/spdk/pci_ids.h 00:04:54.014 TEST_HEADER include/spdk/pipe.h 00:04:54.014 TEST_HEADER include/spdk/queue.h 00:04:54.014 TEST_HEADER include/spdk/reduce.h 00:04:54.014 TEST_HEADER include/spdk/rpc.h 00:04:54.014 TEST_HEADER include/spdk/scheduler.h 00:04:54.014 TEST_HEADER include/spdk/scsi.h 00:04:54.014 TEST_HEADER include/spdk/scsi_spec.h 00:04:54.014 TEST_HEADER include/spdk/sock.h 00:04:54.014 TEST_HEADER include/spdk/stdinc.h 00:04:54.014 TEST_HEADER include/spdk/string.h 00:04:54.014 TEST_HEADER include/spdk/thread.h 00:04:54.014 TEST_HEADER include/spdk/trace.h 00:04:54.014 TEST_HEADER include/spdk/trace_parser.h 00:04:54.014 TEST_HEADER include/spdk/tree.h 00:04:54.014 TEST_HEADER include/spdk/ublk.h 00:04:54.014 TEST_HEADER include/spdk/util.h 00:04:54.014 TEST_HEADER include/spdk/uuid.h 00:04:54.014 LINK rpc_client_test 00:04:54.014 TEST_HEADER include/spdk/version.h 00:04:54.014 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:54.014 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:54.014 TEST_HEADER include/spdk/vhost.h 00:04:54.014 TEST_HEADER include/spdk/vmd.h 00:04:54.014 TEST_HEADER include/spdk/xor.h 00:04:54.272 TEST_HEADER include/spdk/zipf.h 00:04:54.272 CXX test/cpp_headers/accel.o 00:04:54.272 LINK poller_perf 00:04:54.272 LINK nvmf_tgt 00:04:54.272 LINK zipf 00:04:54.272 LINK bdev_svc 00:04:54.272 LINK spdk_trace_record 00:04:54.272 CXX test/cpp_headers/accel_module.o 00:04:54.272 CXX test/cpp_headers/assert.o 00:04:54.272 LINK spdk_trace 00:04:54.272 CC test/env/vtophys/vtophys.o 00:04:54.272 CXX test/cpp_headers/barrier.o 00:04:54.529 CXX test/cpp_headers/base64.o 00:04:54.529 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:54.529 LINK vtophys 00:04:54.529 CC examples/ioat/perf/perf.o 00:04:54.529 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:54.529 LINK test_dma 00:04:54.529 CXX test/cpp_headers/bdev.o 00:04:54.529 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:54.529 LINK mem_callbacks 00:04:54.529 CC app/iscsi_tgt/iscsi_tgt.o 00:04:54.529 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:54.789 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:54.789 CXX test/cpp_headers/bdev_module.o 00:04:54.789 LINK env_dpdk_post_init 00:04:54.789 CXX test/cpp_headers/bdev_zone.o 00:04:54.789 LINK ioat_perf 00:04:54.789 LINK iscsi_tgt 00:04:54.789 CC app/spdk_lspci/spdk_lspci.o 00:04:54.789 CC app/spdk_tgt/spdk_tgt.o 00:04:54.789 LINK nvme_fuzz 00:04:55.050 CXX test/cpp_headers/bit_array.o 00:04:55.050 CC test/env/memory/memory_ut.o 00:04:55.050 LINK spdk_lspci 00:04:55.050 CC examples/ioat/verify/verify.o 00:04:55.050 CC examples/vmd/lsvmd/lsvmd.o 00:04:55.050 LINK spdk_tgt 00:04:55.050 LINK vhost_fuzz 00:04:55.050 CXX test/cpp_headers/bit_pool.o 00:04:55.050 CXX test/cpp_headers/blob_bdev.o 00:04:55.050 CC examples/idxd/perf/perf.o 00:04:55.312 LINK lsvmd 00:04:55.312 LINK verify 00:04:55.312 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:55.312 CXX test/cpp_headers/blobfs_bdev.o 00:04:55.312 CC app/spdk_nvme_perf/perf.o 00:04:55.312 LINK interrupt_tgt 00:04:55.573 CC examples/sock/hello_world/hello_sock.o 00:04:55.573 CC examples/vmd/led/led.o 00:04:55.573 CC examples/thread/thread/thread_ex.o 00:04:55.573 CC app/spdk_nvme_identify/identify.o 
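[Editor's note] The TEST_HEADER lines and the "CXX test/cpp_headers/*.o" objects come from the public-header compilation check. A minimal sketch of the idea, assuming the check simply compiles each installed header standalone in a C++ translation unit to prove it is self-contained; the generated file names and compiler flags are hypothetical, not taken from SPDK's Makefiles:

    # hypothetical stand-alone version of the header self-containment check
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/$name.cpp"
        g++ -I include -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
    done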
00:04:55.573 CXX test/cpp_headers/blobfs.o 00:04:55.573 LINK idxd_perf 00:04:55.573 LINK led 00:04:55.573 CXX test/cpp_headers/blob.o 00:04:55.573 CXX test/cpp_headers/conf.o 00:04:55.833 LINK hello_sock 00:04:55.833 LINK thread 00:04:55.833 CXX test/cpp_headers/config.o 00:04:55.833 CC app/spdk_nvme_discover/discovery_aer.o 00:04:55.833 CXX test/cpp_headers/cpuset.o 00:04:55.833 CC app/spdk_top/spdk_top.o 00:04:55.833 CXX test/cpp_headers/crc16.o 00:04:56.092 CC app/vhost/vhost.o 00:04:56.092 CXX test/cpp_headers/crc32.o 00:04:56.092 LINK spdk_nvme_discover 00:04:56.092 CC examples/nvme/hello_world/hello_world.o 00:04:56.092 LINK memory_ut 00:04:56.092 LINK vhost 00:04:56.092 CXX test/cpp_headers/crc64.o 00:04:56.351 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:56.351 CXX test/cpp_headers/dif.o 00:04:56.351 LINK hello_world 00:04:56.351 LINK spdk_nvme_perf 00:04:56.351 CXX test/cpp_headers/dma.o 00:04:56.351 LINK iscsi_fuzz 00:04:56.627 CC test/env/pci/pci_ut.o 00:04:56.627 CC examples/nvme/reconnect/reconnect.o 00:04:56.627 LINK hello_fsdev 00:04:56.627 CXX test/cpp_headers/endian.o 00:04:56.627 LINK spdk_nvme_identify 00:04:56.627 CC app/spdk_dd/spdk_dd.o 00:04:56.627 CXX test/cpp_headers/env_dpdk.o 00:04:56.627 CXX test/cpp_headers/env.o 00:04:56.627 CC app/fio/nvme/fio_plugin.o 00:04:56.887 CC test/event/event_perf/event_perf.o 00:04:56.887 CC test/app/histogram_perf/histogram_perf.o 00:04:56.887 CC app/fio/bdev/fio_plugin.o 00:04:56.887 CXX test/cpp_headers/event.o 00:04:56.887 LINK reconnect 00:04:56.887 LINK pci_ut 00:04:56.887 LINK spdk_top 00:04:56.887 LINK event_perf 00:04:56.887 LINK histogram_perf 00:04:57.148 LINK spdk_dd 00:04:57.148 CXX test/cpp_headers/fd_group.o 00:04:57.148 CC examples/accel/perf/accel_perf.o 00:04:57.148 CC test/app/jsoncat/jsoncat.o 00:04:57.148 CC test/event/reactor/reactor.o 00:04:57.148 CC test/app/stub/stub.o 00:04:57.148 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:57.148 CXX test/cpp_headers/fd.o 00:04:57.409 LINK reactor 00:04:57.409 LINK jsoncat 00:04:57.409 CC test/nvme/aer/aer.o 00:04:57.409 LINK spdk_nvme 00:04:57.409 CXX test/cpp_headers/file.o 00:04:57.409 LINK stub 00:04:57.409 LINK spdk_bdev 00:04:57.409 CC test/accel/dif/dif.o 00:04:57.409 CC test/event/reactor_perf/reactor_perf.o 00:04:57.752 CXX test/cpp_headers/fsdev.o 00:04:57.752 CC test/event/app_repeat/app_repeat.o 00:04:57.752 CC test/nvme/reset/reset.o 00:04:57.752 LINK aer 00:04:57.752 LINK accel_perf 00:04:57.752 LINK reactor_perf 00:04:57.752 CC test/event/scheduler/scheduler.o 00:04:57.752 CXX test/cpp_headers/fsdev_module.o 00:04:57.752 LINK app_repeat 00:04:57.752 CC test/blobfs/mkfs/mkfs.o 00:04:57.752 LINK nvme_manage 00:04:57.752 LINK reset 00:04:57.752 CXX test/cpp_headers/ftl.o 00:04:58.014 CC test/nvme/sgl/sgl.o 00:04:58.014 CC test/nvme/e2edp/nvme_dp.o 00:04:58.014 LINK mkfs 00:04:58.014 LINK scheduler 00:04:58.014 CC examples/nvme/arbitration/arbitration.o 00:04:58.014 CC examples/nvme/hotplug/hotplug.o 00:04:58.014 CXX test/cpp_headers/fuse_dispatcher.o 00:04:58.014 CC test/lvol/esnap/esnap.o 00:04:58.014 CC test/nvme/overhead/overhead.o 00:04:58.014 CXX test/cpp_headers/gpt_spec.o 00:04:58.014 CXX test/cpp_headers/hexlify.o 00:04:58.276 LINK dif 00:04:58.276 LINK nvme_dp 00:04:58.276 LINK sgl 00:04:58.276 LINK hotplug 00:04:58.276 CXX test/cpp_headers/histogram_data.o 00:04:58.276 CC test/nvme/err_injection/err_injection.o 00:04:58.276 LINK arbitration 00:04:58.276 CXX test/cpp_headers/idxd.o 00:04:58.276 CXX test/cpp_headers/idxd_spec.o 00:04:58.276 CC 
test/nvme/startup/startup.o 00:04:58.276 CXX test/cpp_headers/init.o 00:04:58.276 LINK overhead 00:04:58.534 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:58.534 LINK err_injection 00:04:58.534 CXX test/cpp_headers/ioat.o 00:04:58.534 CXX test/cpp_headers/ioat_spec.o 00:04:58.534 LINK startup 00:04:58.534 CC examples/nvme/abort/abort.o 00:04:58.534 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:58.534 CC test/nvme/reserve/reserve.o 00:04:58.534 CXX test/cpp_headers/iscsi_spec.o 00:04:58.534 LINK cmb_copy 00:04:58.534 CC test/bdev/bdevio/bdevio.o 00:04:58.793 CC test/nvme/simple_copy/simple_copy.o 00:04:58.793 CC test/nvme/connect_stress/connect_stress.o 00:04:58.793 LINK pmr_persistence 00:04:58.793 CC test/nvme/boot_partition/boot_partition.o 00:04:58.793 CXX test/cpp_headers/json.o 00:04:58.793 LINK reserve 00:04:58.793 CXX test/cpp_headers/jsonrpc.o 00:04:58.793 LINK connect_stress 00:04:58.793 LINK boot_partition 00:04:58.793 LINK abort 00:04:58.793 LINK simple_copy 00:04:58.793 CC test/nvme/compliance/nvme_compliance.o 00:04:59.052 CC examples/blob/hello_world/hello_blob.o 00:04:59.052 CC test/nvme/fused_ordering/fused_ordering.o 00:04:59.052 CXX test/cpp_headers/keyring.o 00:04:59.052 LINK bdevio 00:04:59.052 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:59.052 CC test/nvme/fdp/fdp.o 00:04:59.052 CC test/nvme/cuse/cuse.o 00:04:59.052 CXX test/cpp_headers/keyring_module.o 00:04:59.052 LINK hello_blob 00:04:59.310 LINK fused_ordering 00:04:59.310 CC examples/bdev/hello_world/hello_bdev.o 00:04:59.310 LINK nvme_compliance 00:04:59.310 LINK doorbell_aers 00:04:59.310 CXX test/cpp_headers/likely.o 00:04:59.310 CC examples/bdev/bdevperf/bdevperf.o 00:04:59.310 CXX test/cpp_headers/log.o 00:04:59.310 LINK fdp 00:04:59.310 CXX test/cpp_headers/lvol.o 00:04:59.310 CXX test/cpp_headers/md5.o 00:04:59.310 CC examples/blob/cli/blobcli.o 00:04:59.310 CXX test/cpp_headers/memory.o 00:04:59.567 LINK hello_bdev 00:04:59.567 CXX test/cpp_headers/mmio.o 00:04:59.567 CXX test/cpp_headers/nbd.o 00:04:59.567 CXX test/cpp_headers/net.o 00:04:59.567 CXX test/cpp_headers/notify.o 00:04:59.567 CXX test/cpp_headers/nvme.o 00:04:59.567 CXX test/cpp_headers/nvme_intel.o 00:04:59.567 CXX test/cpp_headers/nvme_ocssd.o 00:04:59.567 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:59.567 CXX test/cpp_headers/nvme_spec.o 00:04:59.567 CXX test/cpp_headers/nvme_zns.o 00:04:59.826 CXX test/cpp_headers/nvmf_cmd.o 00:04:59.826 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:59.826 CXX test/cpp_headers/nvmf.o 00:04:59.826 CXX test/cpp_headers/nvmf_spec.o 00:04:59.826 CXX test/cpp_headers/nvmf_transport.o 00:04:59.826 LINK blobcli 00:04:59.826 CXX test/cpp_headers/opal.o 00:04:59.826 CXX test/cpp_headers/opal_spec.o 00:04:59.826 CXX test/cpp_headers/pci_ids.o 00:04:59.826 CXX test/cpp_headers/pipe.o 00:04:59.826 CXX test/cpp_headers/queue.o 00:04:59.826 CXX test/cpp_headers/reduce.o 00:04:59.826 CXX test/cpp_headers/rpc.o 00:05:00.083 CXX test/cpp_headers/scheduler.o 00:05:00.083 CXX test/cpp_headers/scsi.o 00:05:00.083 CXX test/cpp_headers/scsi_spec.o 00:05:00.083 CXX test/cpp_headers/sock.o 00:05:00.083 CXX test/cpp_headers/stdinc.o 00:05:00.083 CXX test/cpp_headers/string.o 00:05:00.083 CXX test/cpp_headers/thread.o 00:05:00.083 CXX test/cpp_headers/trace.o 00:05:00.083 CXX test/cpp_headers/trace_parser.o 00:05:00.083 CXX test/cpp_headers/tree.o 00:05:00.083 LINK bdevperf 00:05:00.083 CXX test/cpp_headers/ublk.o 00:05:00.083 CXX test/cpp_headers/util.o 00:05:00.083 CXX test/cpp_headers/uuid.o 00:05:00.083 CXX 
test/cpp_headers/version.o 00:05:00.344 CXX test/cpp_headers/vfio_user_pci.o 00:05:00.344 CXX test/cpp_headers/vfio_user_spec.o 00:05:00.344 CXX test/cpp_headers/vhost.o 00:05:00.344 CXX test/cpp_headers/vmd.o 00:05:00.344 LINK cuse 00:05:00.344 CXX test/cpp_headers/xor.o 00:05:00.344 CXX test/cpp_headers/zipf.o 00:05:00.605 CC examples/nvmf/nvmf/nvmf.o 00:05:00.866 LINK nvmf 00:05:04.171 LINK esnap 00:05:04.171 00:05:04.171 real 1m8.411s 00:05:04.171 user 5m30.357s 00:05:04.171 sys 0m55.842s 00:05:04.171 13:18:00 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:04.171 13:18:00 make -- common/autotest_common.sh@10 -- $ set +x 00:05:04.171 ************************************ 00:05:04.171 END TEST make 00:05:04.171 ************************************ 00:05:04.171 13:18:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:04.171 13:18:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:04.171 13:18:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:04.171 13:18:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.171 13:18:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:04.171 13:18:00 -- pm/common@44 -- $ pid=5826 00:05:04.171 13:18:00 -- pm/common@50 -- $ kill -TERM 5826 00:05:04.171 13:18:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.171 13:18:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:04.433 13:18:00 -- pm/common@44 -- $ pid=5827 00:05:04.433 13:18:00 -- pm/common@50 -- $ kill -TERM 5827 00:05:04.433 13:18:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:04.433 13:18:00 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:04.433 13:18:00 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.433 13:18:00 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.433 13:18:00 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.433 13:18:00 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.433 13:18:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.433 13:18:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.433 13:18:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.433 13:18:00 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.433 13:18:00 -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.433 13:18:00 -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.433 13:18:00 -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.433 13:18:00 -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.433 13:18:00 -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.433 13:18:00 -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.433 13:18:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.433 13:18:00 -- scripts/common.sh@344 -- # case "$op" in 00:05:04.433 13:18:00 -- scripts/common.sh@345 -- # : 1 00:05:04.433 13:18:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.433 13:18:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.433 13:18:00 -- scripts/common.sh@365 -- # decimal 1 00:05:04.433 13:18:00 -- scripts/common.sh@353 -- # local d=1 00:05:04.433 13:18:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.433 13:18:00 -- scripts/common.sh@355 -- # echo 1 00:05:04.433 13:18:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.433 13:18:00 -- scripts/common.sh@366 -- # decimal 2 00:05:04.433 13:18:00 -- scripts/common.sh@353 -- # local d=2 00:05:04.433 13:18:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.433 13:18:00 -- scripts/common.sh@355 -- # echo 2 00:05:04.433 13:18:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.433 13:18:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.433 13:18:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.433 13:18:00 -- scripts/common.sh@368 -- # return 0 00:05:04.433 13:18:00 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.433 13:18:00 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.434 --rc genhtml_branch_coverage=1 00:05:04.434 --rc genhtml_function_coverage=1 00:05:04.434 --rc genhtml_legend=1 00:05:04.434 --rc geninfo_all_blocks=1 00:05:04.434 --rc geninfo_unexecuted_blocks=1 00:05:04.434 00:05:04.434 ' 00:05:04.434 13:18:00 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.434 --rc genhtml_branch_coverage=1 00:05:04.434 --rc genhtml_function_coverage=1 00:05:04.434 --rc genhtml_legend=1 00:05:04.434 --rc geninfo_all_blocks=1 00:05:04.434 --rc geninfo_unexecuted_blocks=1 00:05:04.434 00:05:04.434 ' 00:05:04.434 13:18:00 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.434 --rc genhtml_branch_coverage=1 00:05:04.434 --rc genhtml_function_coverage=1 00:05:04.434 --rc genhtml_legend=1 00:05:04.434 --rc geninfo_all_blocks=1 00:05:04.434 --rc geninfo_unexecuted_blocks=1 00:05:04.434 00:05:04.434 ' 00:05:04.434 13:18:00 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.434 --rc genhtml_branch_coverage=1 00:05:04.434 --rc genhtml_function_coverage=1 00:05:04.434 --rc genhtml_legend=1 00:05:04.434 --rc geninfo_all_blocks=1 00:05:04.434 --rc geninfo_unexecuted_blocks=1 00:05:04.434 00:05:04.434 ' 00:05:04.434 13:18:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.434 13:18:00 -- nvmf/common.sh@7 -- # uname -s 00:05:04.434 13:18:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.434 13:18:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.434 13:18:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.434 13:18:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.434 13:18:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.434 13:18:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.434 13:18:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.434 13:18:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.434 13:18:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.434 13:18:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.434 13:18:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ddb0366-2ef0-47b4-a531-d667894373d3 00:05:04.434 
13:18:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=7ddb0366-2ef0-47b4-a531-d667894373d3 00:05:04.434 13:18:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.434 13:18:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.434 13:18:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.434 13:18:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.434 13:18:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.434 13:18:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.434 13:18:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.434 13:18:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.434 13:18:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.434 13:18:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.434 13:18:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.434 13:18:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.434 13:18:00 -- paths/export.sh@5 -- # export PATH 00:05:04.434 13:18:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.434 13:18:00 -- nvmf/common.sh@51 -- # : 0 00:05:04.434 13:18:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.434 13:18:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.434 13:18:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.434 13:18:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.434 13:18:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.434 13:18:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.434 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.434 13:18:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.434 13:18:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.434 13:18:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.434 13:18:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:04.434 13:18:00 -- spdk/autotest.sh@32 -- # uname -s 00:05:04.434 13:18:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:04.434 13:18:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:04.434 13:18:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:04.434 13:18:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:04.434 13:18:00 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:04.434 13:18:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:04.434 13:18:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:04.434 13:18:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:04.434 13:18:00 -- spdk/autotest.sh@48 -- # udevadm_pid=66650 00:05:04.434 13:18:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:04.434 13:18:00 -- pm/common@17 -- # local monitor 00:05:04.434 13:18:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.434 13:18:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:04.434 13:18:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.434 13:18:00 -- pm/common@25 -- # sleep 1 00:05:04.434 13:18:00 -- pm/common@21 -- # date +%s 00:05:04.696 13:18:00 -- pm/common@21 -- # date +%s 00:05:04.696 13:18:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731935880 00:05:04.696 13:18:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731935880 00:05:04.696 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731935880_collect-cpu-load.pm.log 00:05:04.696 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731935880_collect-vmstat.pm.log 00:05:05.639 13:18:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:05.639 13:18:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:05.639 13:18:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.639 13:18:01 -- common/autotest_common.sh@10 -- # set +x 00:05:05.639 13:18:01 -- spdk/autotest.sh@59 -- # create_test_list 00:05:05.639 13:18:01 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:05.639 13:18:01 -- common/autotest_common.sh@10 -- # set +x 00:05:05.639 13:18:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:05.639 13:18:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:05.639 13:18:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:05.639 13:18:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:05.639 13:18:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:05.639 13:18:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:05.639 13:18:01 -- common/autotest_common.sh@1457 -- # uname 00:05:05.639 13:18:01 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:05.639 13:18:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:05.639 13:18:01 -- common/autotest_common.sh@1477 -- # uname 00:05:05.639 13:18:01 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:05.639 13:18:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:05.640 13:18:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:05.640 lcov: LCOV version 1.15 00:05:05.640 13:18:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:20.543 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:20.543 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:35.419 13:18:30 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:35.419 13:18:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.419 13:18:30 -- common/autotest_common.sh@10 -- # set +x 00:05:35.419 13:18:30 -- spdk/autotest.sh@78 -- # rm -f 00:05:35.419 13:18:30 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.678 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:35.678 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:35.679 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:35.679 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:35.679 13:18:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:35.679 13:18:31 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:35.679 13:18:31 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:35.679 13:18:31 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:35.679 13:18:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:35.679 13:18:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:35.679 13:18:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:35.679 13:18:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:35.679 13:18:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:35.679 13:18:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:35.679 13:18:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:35.679 13:18:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:35.679 13:18:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:35.679 13:18:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:35.679 13:18:31 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:35.679 13:18:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:35.679 13:18:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:35.679 13:18:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:35.679 13:18:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:35.679 13:18:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:35.679 13:18:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.679 13:18:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.679 13:18:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:35.679 13:18:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:35.679 13:18:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:35.938 No valid GPT data, bailing 00:05:35.938 13:18:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:35.938 13:18:31 -- scripts/common.sh@394 -- # pt= 00:05:35.938 13:18:31 -- scripts/common.sh@395 -- # return 1 00:05:35.938 13:18:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:35.938 1+0 records in 00:05:35.938 1+0 records out 00:05:35.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449551 s, 233 MB/s 00:05:35.938 13:18:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.938 13:18:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.938 13:18:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:35.938 13:18:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:35.938 13:18:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:35.938 No valid GPT data, bailing 00:05:35.938 13:18:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:35.938 13:18:31 -- scripts/common.sh@394 -- # pt= 00:05:35.938 13:18:31 -- scripts/common.sh@395 -- # return 1 00:05:35.938 13:18:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:35.938 1+0 records in 00:05:35.938 1+0 records out 00:05:35.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382446 s, 274 MB/s 00:05:35.938 13:18:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.938 13:18:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.938 13:18:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:35.938 13:18:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:35.938 13:18:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:35.938 No valid GPT data, bailing 00:05:35.938 13:18:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:35.938 13:18:31 -- scripts/common.sh@394 -- # pt= 00:05:35.938 13:18:31 -- scripts/common.sh@395 -- # return 1 00:05:35.938 13:18:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:35.938 1+0 
records in 00:05:35.938 1+0 records out 00:05:35.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00339729 s, 309 MB/s 00:05:35.938 13:18:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.938 13:18:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.938 13:18:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:35.938 13:18:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:35.938 13:18:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:35.938 No valid GPT data, bailing 00:05:35.938 13:18:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:35.938 13:18:32 -- scripts/common.sh@394 -- # pt= 00:05:35.938 13:18:32 -- scripts/common.sh@395 -- # return 1 00:05:35.938 13:18:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:35.938 1+0 records in 00:05:35.938 1+0 records out 00:05:35.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380235 s, 276 MB/s 00:05:35.938 13:18:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.938 13:18:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.938 13:18:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:35.938 13:18:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:35.938 13:18:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:35.938 No valid GPT data, bailing 00:05:35.938 13:18:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:36.197 13:18:32 -- scripts/common.sh@394 -- # pt= 00:05:36.197 13:18:32 -- scripts/common.sh@395 -- # return 1 00:05:36.197 13:18:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:36.197 1+0 records in 00:05:36.197 1+0 records out 00:05:36.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0099292 s, 106 MB/s 00:05:36.197 13:18:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.197 13:18:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.197 13:18:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:36.197 13:18:32 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:36.197 13:18:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:36.197 No valid GPT data, bailing 00:05:36.197 13:18:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:36.197 13:18:32 -- scripts/common.sh@394 -- # pt= 00:05:36.197 13:18:32 -- scripts/common.sh@395 -- # return 1 00:05:36.197 13:18:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:36.197 1+0 records in 00:05:36.197 1+0 records out 00:05:36.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434787 s, 241 MB/s 00:05:36.197 13:18:32 -- spdk/autotest.sh@105 -- # sync 00:05:36.197 13:18:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:36.197 13:18:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:36.197 13:18:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:38.099 13:18:33 -- spdk/autotest.sh@111 -- # uname -s 00:05:38.099 13:18:33 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:38.099 13:18:33 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:38.099 13:18:33 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:38.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.666 
Hugepages 00:05:38.666 node hugesize free / total 00:05:38.666 node0 1048576kB 0 / 0 00:05:38.666 node0 2048kB 0 / 0 00:05:38.666 00:05:38.666 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:38.666 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:38.666 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:38.666 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:38.666 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:38.922 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:38.922 13:18:34 -- spdk/autotest.sh@117 -- # uname -s 00:05:38.922 13:18:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:38.922 13:18:34 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:38.922 13:18:34 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:39.181 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.769 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.769 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.769 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.769 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.769 13:18:35 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:40.703 13:18:36 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:40.703 13:18:36 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:40.703 13:18:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:40.703 13:18:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:40.703 13:18:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:40.703 13:18:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:40.703 13:18:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.703 13:18:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:40.703 13:18:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:40.961 13:18:36 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:40.961 13:18:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:40.961 13:18:36 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.219 Waiting for block devices as requested 00:05:41.476 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.476 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.476 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.476 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:46.738 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:46.738 13:18:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:46.738 13:18:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.738 13:18:42 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.738 13:18:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:46.738 13:18:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:46.738 13:18:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:46.738 13:18:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:46.738 13:18:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:46.738 13:18:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1543 -- # continue 00:05:46.738 13:18:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:46.738 13:18:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:46.738 13:18:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:46.738 13:18:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:46.738 13:18:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1543 -- # continue 00:05:46.738 13:18:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:46.738 13:18:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:46.738 13:18:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:46.738 13:18:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:46.738 13:18:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1543 -- # continue 00:05:46.738 13:18:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:46.738 13:18:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:46.738 13:18:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:46.738 13:18:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:46.738 13:18:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:46.738 13:18:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
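[Editor's note] The trace above repeats the same pair of nvme-cli admin queries for each controller: read OACS to see whether namespace management is supported, then read unvmcap to see whether any capacity is left unallocated. A condensed equivalent of that loop, using the controller names visible in this run; the OACS bit-masking is inferred from the values in the trace, and the revert step itself is elided:

    for ctrl in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
        oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)         # e.g. 0x12a in this log
        ns_manage=$(( oacs & 0x8 ))                                    # namespace-management bit of OACS
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)   # unallocated NVM capacity
        if (( ns_manage != 0 )) && (( unvmcap == 0 )); then
            continue                                                   # nothing to revert on this controller
        fi
        # a namespace cleanup/revert would happen here
    done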
00:05:46.738 13:18:42 -- common/autotest_common.sh@1543 -- # continue 00:05:46.738 13:18:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:46.738 13:18:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.738 13:18:42 -- common/autotest_common.sh@10 -- # set +x 00:05:46.738 13:18:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:46.739 13:18:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.739 13:18:42 -- common/autotest_common.sh@10 -- # set +x 00:05:46.739 13:18:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.629 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.629 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.629 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.629 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.629 13:18:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:47.629 13:18:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.629 13:18:43 -- common/autotest_common.sh@10 -- # set +x 00:05:47.629 13:18:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:47.629 13:18:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:47.629 13:18:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:47.629 13:18:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:47.629 13:18:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:47.629 13:18:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:47.629 13:18:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:47.629 13:18:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:47.629 13:18:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:47.629 13:18:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:47.629 13:18:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.629 13:18:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.629 13:18:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:47.887 13:18:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:47.888 13:18:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:47.888 13:18:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:47.888 13:18:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:47.888 13:18:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:47.888 13:18:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.888 13:18:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:47.888 13:18:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:47.888 13:18:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:47.888 13:18:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.888 13:18:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:47.888 13:18:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:47.888 13:18:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:47.888 13:18:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
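[Editor's note] The opal_revert_cleanup trace around this point enumerates the NVMe controllers from the generated SPDK config and keeps only those whose PCI device ID is 0x0a54; on this VM every controller reports 0x0010 (the emulated QEMU NVMe controller, vendor 1b36, shown in the setup.sh status output earlier), so the list stays empty and the cleanup is skipped. A short sketch of that screen using the same helpers visible in the trace:

    # list NVMe PCI addresses from the generated config, then filter by PCI device ID
    bdfs=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    for bdf in $bdfs; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 for each controller in this run
        [[ $device == 0x0a54 ]] && echo "$bdf"             # only 0x0a54 devices get the OPAL revert
    done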
00:05:47.888 13:18:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:47.888 13:18:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:47.888 13:18:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:47.888 13:18:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.888 13:18:43 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:47.888 13:18:43 -- common/autotest_common.sh@1572 -- # return 0 00:05:47.888 13:18:43 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:47.888 13:18:43 -- common/autotest_common.sh@1580 -- # return 0 00:05:47.888 13:18:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:47.888 13:18:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:47.888 13:18:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.888 13:18:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.888 13:18:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:47.888 13:18:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.888 13:18:43 -- common/autotest_common.sh@10 -- # set +x 00:05:47.888 13:18:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:47.888 13:18:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.888 13:18:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.888 13:18:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.888 13:18:43 -- common/autotest_common.sh@10 -- # set +x 00:05:47.888 ************************************ 00:05:47.888 START TEST env 00:05:47.888 ************************************ 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.888 * Looking for test storage... 00:05:47.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.888 13:18:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.888 13:18:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.888 13:18:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.888 13:18:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.888 13:18:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.888 13:18:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.888 13:18:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.888 13:18:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.888 13:18:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.888 13:18:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.888 13:18:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.888 13:18:43 env -- scripts/common.sh@344 -- # case "$op" in 00:05:47.888 13:18:43 env -- scripts/common.sh@345 -- # : 1 00:05:47.888 13:18:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.888 13:18:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.888 13:18:43 env -- scripts/common.sh@365 -- # decimal 1 00:05:47.888 13:18:43 env -- scripts/common.sh@353 -- # local d=1 00:05:47.888 13:18:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.888 13:18:43 env -- scripts/common.sh@355 -- # echo 1 00:05:47.888 13:18:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.888 13:18:43 env -- scripts/common.sh@366 -- # decimal 2 00:05:47.888 13:18:43 env -- scripts/common.sh@353 -- # local d=2 00:05:47.888 13:18:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.888 13:18:43 env -- scripts/common.sh@355 -- # echo 2 00:05:47.888 13:18:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.888 13:18:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.888 13:18:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.888 13:18:43 env -- scripts/common.sh@368 -- # return 0 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.888 --rc genhtml_branch_coverage=1 00:05:47.888 --rc genhtml_function_coverage=1 00:05:47.888 --rc genhtml_legend=1 00:05:47.888 --rc geninfo_all_blocks=1 00:05:47.888 --rc geninfo_unexecuted_blocks=1 00:05:47.888 00:05:47.888 ' 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.888 --rc genhtml_branch_coverage=1 00:05:47.888 --rc genhtml_function_coverage=1 00:05:47.888 --rc genhtml_legend=1 00:05:47.888 --rc geninfo_all_blocks=1 00:05:47.888 --rc geninfo_unexecuted_blocks=1 00:05:47.888 00:05:47.888 ' 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.888 --rc genhtml_branch_coverage=1 00:05:47.888 --rc genhtml_function_coverage=1 00:05:47.888 --rc genhtml_legend=1 00:05:47.888 --rc geninfo_all_blocks=1 00:05:47.888 --rc geninfo_unexecuted_blocks=1 00:05:47.888 00:05:47.888 ' 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.888 --rc genhtml_branch_coverage=1 00:05:47.888 --rc genhtml_function_coverage=1 00:05:47.888 --rc genhtml_legend=1 00:05:47.888 --rc geninfo_all_blocks=1 00:05:47.888 --rc geninfo_unexecuted_blocks=1 00:05:47.888 00:05:47.888 ' 00:05:47.888 13:18:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.888 13:18:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.888 13:18:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.888 ************************************ 00:05:47.888 START TEST env_memory 00:05:47.888 ************************************ 00:05:47.888 13:18:43 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.888 00:05:47.888 00:05:47.888 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.888 http://cunit.sourceforge.net/ 00:05:47.888 00:05:47.888 00:05:47.888 Suite: memory 00:05:47.888 Test: alloc and free memory map ...[2024-11-18 13:18:43.982230] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:47.888 passed 00:05:48.146 Test: mem map translation ...[2024-11-18 13:18:44.021097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:48.146 [2024-11-18 13:18:44.021239] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:48.146 [2024-11-18 13:18:44.021416] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:48.146 [2024-11-18 13:18:44.021485] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:48.146 passed 00:05:48.146 Test: mem map registration ...[2024-11-18 13:18:44.089711] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:48.146 [2024-11-18 13:18:44.089841] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:48.146 passed 00:05:48.146 Test: mem map adjacent registrations ...passed 00:05:48.146 00:05:48.146 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.146 suites 1 1 n/a 0 0 00:05:48.146 tests 4 4 4 0 0 00:05:48.146 asserts 152 152 152 0 n/a 00:05:48.146 00:05:48.146 Elapsed time = 0.233 seconds 00:05:48.146 00:05:48.146 real 0m0.269s 00:05:48.146 user 0m0.240s 00:05:48.146 sys 0m0.020s 00:05:48.146 13:18:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.146 13:18:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:48.146 ************************************ 00:05:48.146 END TEST env_memory 00:05:48.146 ************************************ 00:05:48.146 13:18:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:48.146 13:18:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.146 13:18:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.146 13:18:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.146 ************************************ 00:05:48.146 START TEST env_vtophys 00:05:48.146 ************************************ 00:05:48.146 13:18:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:48.146 EAL: lib.eal log level changed from notice to debug 00:05:48.146 EAL: Detected lcore 0 as core 0 on socket 0 00:05:48.147 EAL: Detected lcore 1 as core 0 on socket 0 00:05:48.147 EAL: Detected lcore 2 as core 0 on socket 0 00:05:48.147 EAL: Detected lcore 3 as core 0 on socket 0 00:05:48.147 EAL: Detected lcore 4 as core 0 on socket 0 00:05:48.147 EAL: Detected lcore 5 as core 0 on socket 0 00:05:48.147 EAL: Detected lcore 6 as core 0 on socket 0 00:05:48.147 EAL: Detected lcore 7 as core 0 on socket 0 00:05:48.147 EAL: Detected lcore 8 as core 0 on socket 0 00:05:48.147 EAL: Detected lcore 9 as core 0 on socket 0 00:05:48.405 EAL: Maximum logical cores by configuration: 128 00:05:48.405 EAL: Detected CPU lcores: 10 00:05:48.405 EAL: Detected NUMA nodes: 1 00:05:48.405 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:48.405 EAL: Detected shared linkage of DPDK 00:05:48.405 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:48.405 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:48.405 EAL: Registered [vdev] bus. 00:05:48.405 EAL: bus.vdev log level changed from disabled to notice 00:05:48.405 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:48.405 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:48.405 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:48.405 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:48.405 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:48.405 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:48.405 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:48.405 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:48.405 EAL: No shared files mode enabled, IPC will be disabled 00:05:48.405 EAL: No shared files mode enabled, IPC is disabled 00:05:48.405 EAL: Selected IOVA mode 'PA' 00:05:48.405 EAL: Probing VFIO support... 00:05:48.405 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.406 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:48.406 EAL: Ask a virtual area of 0x2e000 bytes 00:05:48.406 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:48.406 EAL: Setting up physically contiguous memory... 00:05:48.406 EAL: Setting maximum number of open files to 524288 00:05:48.406 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:48.406 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:48.406 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.406 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:48.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.406 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.406 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:48.406 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:48.406 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.406 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:48.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.406 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.406 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:48.406 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:48.406 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.406 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:48.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.406 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.406 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:48.406 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:48.406 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.406 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:48.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.406 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.406 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:48.406 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:48.406 EAL: Hugepages will be freed exactly as allocated. 00:05:48.406 EAL: No shared files mode enabled, IPC is disabled 00:05:48.406 EAL: No shared files mode enabled, IPC is disabled 00:05:48.406 EAL: TSC frequency is ~2600000 KHz 00:05:48.406 EAL: Main lcore 0 is ready (tid=7f24d8a37a40;cpuset=[0]) 00:05:48.406 EAL: Trying to obtain current memory policy. 00:05:48.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.406 EAL: Restoring previous memory policy: 0 00:05:48.406 EAL: request: mp_malloc_sync 00:05:48.406 EAL: No shared files mode enabled, IPC is disabled 00:05:48.406 EAL: Heap on socket 0 was expanded by 2MB 00:05:48.406 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.406 EAL: No shared files mode enabled, IPC is disabled 00:05:48.406 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:48.406 EAL: Mem event callback 'spdk:(nil)' registered 00:05:48.406 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:48.406 00:05:48.406 00:05:48.406 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.406 http://cunit.sourceforge.net/ 00:05:48.406 00:05:48.406 00:05:48.406 Suite: components_suite 00:05:48.664 Test: vtophys_malloc_test ...passed 00:05:48.664 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:48.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.664 EAL: Restoring previous memory policy: 4 00:05:48.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.664 EAL: request: mp_malloc_sync 00:05:48.664 EAL: No shared files mode enabled, IPC is disabled 00:05:48.664 EAL: Heap on socket 0 was expanded by 4MB 00:05:48.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.664 EAL: request: mp_malloc_sync 00:05:48.664 EAL: No shared files mode enabled, IPC is disabled 00:05:48.664 EAL: Heap on socket 0 was shrunk by 4MB 00:05:48.664 EAL: Trying to obtain current memory policy. 00:05:48.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.664 EAL: Restoring previous memory policy: 4 00:05:48.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.664 EAL: request: mp_malloc_sync 00:05:48.664 EAL: No shared files mode enabled, IPC is disabled 00:05:48.664 EAL: Heap on socket 0 was expanded by 6MB 00:05:48.664 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.664 EAL: request: mp_malloc_sync 00:05:48.664 EAL: No shared files mode enabled, IPC is disabled 00:05:48.664 EAL: Heap on socket 0 was shrunk by 6MB 00:05:48.664 EAL: Trying to obtain current memory policy. 00:05:48.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.664 EAL: Restoring previous memory policy: 4 00:05:48.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.665 EAL: request: mp_malloc_sync 00:05:48.665 EAL: No shared files mode enabled, IPC is disabled 00:05:48.665 EAL: Heap on socket 0 was expanded by 10MB 00:05:48.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.665 EAL: request: mp_malloc_sync 00:05:48.665 EAL: No shared files mode enabled, IPC is disabled 00:05:48.665 EAL: Heap on socket 0 was shrunk by 10MB 00:05:48.665 EAL: Trying to obtain current memory policy. 
00:05:48.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.665 EAL: Restoring previous memory policy: 4 00:05:48.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.665 EAL: request: mp_malloc_sync 00:05:48.665 EAL: No shared files mode enabled, IPC is disabled 00:05:48.665 EAL: Heap on socket 0 was expanded by 18MB 00:05:48.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.665 EAL: request: mp_malloc_sync 00:05:48.665 EAL: No shared files mode enabled, IPC is disabled 00:05:48.665 EAL: Heap on socket 0 was shrunk by 18MB 00:05:48.665 EAL: Trying to obtain current memory policy. 00:05:48.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.665 EAL: Restoring previous memory policy: 4 00:05:48.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.665 EAL: request: mp_malloc_sync 00:05:48.665 EAL: No shared files mode enabled, IPC is disabled 00:05:48.665 EAL: Heap on socket 0 was expanded by 34MB 00:05:48.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.665 EAL: request: mp_malloc_sync 00:05:48.665 EAL: No shared files mode enabled, IPC is disabled 00:05:48.665 EAL: Heap on socket 0 was shrunk by 34MB 00:05:48.665 EAL: Trying to obtain current memory policy. 00:05:48.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.665 EAL: Restoring previous memory policy: 4 00:05:48.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.665 EAL: request: mp_malloc_sync 00:05:48.665 EAL: No shared files mode enabled, IPC is disabled 00:05:48.665 EAL: Heap on socket 0 was expanded by 66MB 00:05:48.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.665 EAL: request: mp_malloc_sync 00:05:48.665 EAL: No shared files mode enabled, IPC is disabled 00:05:48.665 EAL: Heap on socket 0 was shrunk by 66MB 00:05:48.665 EAL: Trying to obtain current memory policy. 00:05:48.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.923 EAL: Restoring previous memory policy: 4 00:05:48.923 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.923 EAL: request: mp_malloc_sync 00:05:48.923 EAL: No shared files mode enabled, IPC is disabled 00:05:48.923 EAL: Heap on socket 0 was expanded by 130MB 00:05:48.923 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.923 EAL: request: mp_malloc_sync 00:05:48.923 EAL: No shared files mode enabled, IPC is disabled 00:05:48.923 EAL: Heap on socket 0 was shrunk by 130MB 00:05:48.923 EAL: Trying to obtain current memory policy. 00:05:48.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.923 EAL: Restoring previous memory policy: 4 00:05:48.923 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.923 EAL: request: mp_malloc_sync 00:05:48.923 EAL: No shared files mode enabled, IPC is disabled 00:05:48.923 EAL: Heap on socket 0 was expanded by 258MB 00:05:48.923 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.923 EAL: request: mp_malloc_sync 00:05:48.923 EAL: No shared files mode enabled, IPC is disabled 00:05:48.923 EAL: Heap on socket 0 was shrunk by 258MB 00:05:48.923 EAL: Trying to obtain current memory policy. 
00:05:48.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.923 EAL: Restoring previous memory policy: 4 00:05:48.923 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.923 EAL: request: mp_malloc_sync 00:05:48.923 EAL: No shared files mode enabled, IPC is disabled 00:05:48.923 EAL: Heap on socket 0 was expanded by 514MB 00:05:48.923 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.182 EAL: request: mp_malloc_sync 00:05:49.182 EAL: No shared files mode enabled, IPC is disabled 00:05:49.182 EAL: Heap on socket 0 was shrunk by 514MB 00:05:49.182 EAL: Trying to obtain current memory policy. 00:05:49.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.182 EAL: Restoring previous memory policy: 4 00:05:49.182 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.182 EAL: request: mp_malloc_sync 00:05:49.182 EAL: No shared files mode enabled, IPC is disabled 00:05:49.182 EAL: Heap on socket 0 was expanded by 1026MB 00:05:49.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.440 passed 00:05:49.440 00:05:49.440 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.440 suites 1 1 n/a 0 0 00:05:49.440 tests 2 2 2 0 0 00:05:49.440 asserts 5246 5246 5246 0 n/a 00:05:49.440 00:05:49.440 Elapsed time = 0.985 seconds 00:05:49.440 EAL: request: mp_malloc_sync 00:05:49.440 EAL: No shared files mode enabled, IPC is disabled 00:05:49.440 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:49.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.440 EAL: request: mp_malloc_sync 00:05:49.440 EAL: No shared files mode enabled, IPC is disabled 00:05:49.440 EAL: Heap on socket 0 was shrunk by 2MB 00:05:49.440 EAL: No shared files mode enabled, IPC is disabled 00:05:49.440 EAL: No shared files mode enabled, IPC is disabled 00:05:49.440 EAL: No shared files mode enabled, IPC is disabled 00:05:49.440 ************************************ 00:05:49.440 END TEST env_vtophys 00:05:49.440 ************************************ 00:05:49.440 00:05:49.440 real 0m1.221s 00:05:49.440 user 0m0.503s 00:05:49.440 sys 0m0.585s 00:05:49.440 13:18:45 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.440 13:18:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:49.440 13:18:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:49.440 13:18:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.440 13:18:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.440 13:18:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.440 ************************************ 00:05:49.440 START TEST env_pci 00:05:49.440 ************************************ 00:05:49.440 13:18:45 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:49.440 00:05:49.440 00:05:49.440 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.440 http://cunit.sourceforge.net/ 00:05:49.440 00:05:49.440 00:05:49.440 Suite: pci 00:05:49.440 Test: pci_hook ...[2024-11-18 13:18:45.526748] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69380 has claimed it 00:05:49.440 passed 00:05:49.440 00:05:49.440 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.440 suites 1 1 n/a 0 0 00:05:49.440 tests 1 1 1 0 0 00:05:49.440 asserts 25 25 25 0 n/a 00:05:49.440 00:05:49.440 Elapsed time = 0.004 seconds 00:05:49.440 EAL: Cannot find 
device (10000:00:01.0) 00:05:49.440 EAL: Failed to attach device on primary process 00:05:49.440 00:05:49.440 real 0m0.053s 00:05:49.440 user 0m0.025s 00:05:49.440 sys 0m0.027s 00:05:49.440 13:18:45 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.440 13:18:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:49.440 ************************************ 00:05:49.440 END TEST env_pci 00:05:49.440 ************************************ 00:05:49.698 13:18:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:49.698 13:18:45 env -- env/env.sh@15 -- # uname 00:05:49.698 13:18:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:49.698 13:18:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:49.698 13:18:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:49.698 13:18:45 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:49.698 13:18:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.698 13:18:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.698 ************************************ 00:05:49.698 START TEST env_dpdk_post_init 00:05:49.698 ************************************ 00:05:49.698 13:18:45 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:49.698 EAL: Detected CPU lcores: 10 00:05:49.698 EAL: Detected NUMA nodes: 1 00:05:49.698 EAL: Detected shared linkage of DPDK 00:05:49.698 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:49.698 EAL: Selected IOVA mode 'PA' 00:05:49.698 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:49.698 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:49.698 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:49.698 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:49.698 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:49.698 Starting DPDK initialization... 00:05:49.698 Starting SPDK post initialization... 00:05:49.698 SPDK NVMe probe 00:05:49.698 Attaching to 0000:00:10.0 00:05:49.698 Attaching to 0000:00:11.0 00:05:49.698 Attaching to 0000:00:12.0 00:05:49.698 Attaching to 0000:00:13.0 00:05:49.698 Attached to 0000:00:10.0 00:05:49.698 Attached to 0000:00:11.0 00:05:49.698 Attached to 0000:00:13.0 00:05:49.698 Attached to 0000:00:12.0 00:05:49.698 Cleaning up... 
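env_dpdk_post_init is the one env test here that actually attaches to devices: launched with core mask 0x1 and a fixed base virtual address, it probes the four emulated controllers at 0000:00:10.0 through 0000:00:13.0 and then detaches. Reproducing just that step by hand, assuming the same repo location and that the devices are still bound to uio_pci_generic as shown earlier, would be roughly:

    # sketch only; path and arguments mirror the run_test invocation above
    cd /home/vagrant/spdk_repo/spdk
    ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000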
00:05:49.957 ************************************ 00:05:49.957 END TEST env_dpdk_post_init 00:05:49.957 ************************************ 00:05:49.957 00:05:49.957 real 0m0.226s 00:05:49.957 user 0m0.072s 00:05:49.957 sys 0m0.056s 00:05:49.957 13:18:45 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.957 13:18:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.957 13:18:45 env -- env/env.sh@26 -- # uname 00:05:49.957 13:18:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:49.957 13:18:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:49.957 13:18:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.957 13:18:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.957 13:18:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.957 ************************************ 00:05:49.957 START TEST env_mem_callbacks 00:05:49.957 ************************************ 00:05:49.957 13:18:45 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:49.957 EAL: Detected CPU lcores: 10 00:05:49.957 EAL: Detected NUMA nodes: 1 00:05:49.957 EAL: Detected shared linkage of DPDK 00:05:49.957 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:49.957 EAL: Selected IOVA mode 'PA' 00:05:49.957 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:49.957 00:05:49.957 00:05:49.957 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.957 http://cunit.sourceforge.net/ 00:05:49.957 00:05:49.957 00:05:49.957 Suite: memory 00:05:49.957 Test: test ... 00:05:49.957 register 0x200000200000 2097152 00:05:49.957 malloc 3145728 00:05:49.957 register 0x200000400000 4194304 00:05:49.957 buf 0x200000500000 len 3145728 PASSED 00:05:49.957 malloc 64 00:05:49.957 buf 0x2000004fff40 len 64 PASSED 00:05:49.957 malloc 4194304 00:05:49.957 register 0x200000800000 6291456 00:05:49.957 buf 0x200000a00000 len 4194304 PASSED 00:05:49.957 free 0x200000500000 3145728 00:05:49.957 free 0x2000004fff40 64 00:05:49.957 unregister 0x200000400000 4194304 PASSED 00:05:49.957 free 0x200000a00000 4194304 00:05:49.957 unregister 0x200000800000 6291456 PASSED 00:05:49.957 malloc 8388608 00:05:49.957 register 0x200000400000 10485760 00:05:49.957 buf 0x200000600000 len 8388608 PASSED 00:05:49.957 free 0x200000600000 8388608 00:05:49.957 unregister 0x200000400000 10485760 PASSED 00:05:49.957 passed 00:05:49.957 00:05:49.957 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.957 suites 1 1 n/a 0 0 00:05:49.957 tests 1 1 1 0 0 00:05:49.957 asserts 15 15 15 0 n/a 00:05:49.957 00:05:49.957 Elapsed time = 0.009 seconds 00:05:49.957 00:05:49.957 real 0m0.169s 00:05:49.957 user 0m0.021s 00:05:49.957 sys 0m0.046s 00:05:49.957 13:18:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.957 13:18:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:49.957 ************************************ 00:05:49.957 END TEST env_mem_callbacks 00:05:49.957 ************************************ 00:05:49.957 00:05:49.957 real 0m2.293s 00:05:49.957 user 0m1.000s 00:05:49.957 sys 0m0.938s 00:05:49.957 13:18:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.957 13:18:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.957 ************************************ 00:05:49.957 END TEST env 00:05:49.957 
************************************ 00:05:50.215 13:18:46 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:50.215 13:18:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.215 13:18:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.215 13:18:46 -- common/autotest_common.sh@10 -- # set +x 00:05:50.215 ************************************ 00:05:50.215 START TEST rpc 00:05:50.215 ************************************ 00:05:50.215 13:18:46 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:50.215 * Looking for test storage... 00:05:50.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:50.215 13:18:46 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.215 13:18:46 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.215 13:18:46 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.215 13:18:46 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.215 13:18:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.215 13:18:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.215 13:18:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.215 13:18:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.215 13:18:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.215 13:18:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.215 13:18:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.215 13:18:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.215 13:18:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.215 13:18:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.215 13:18:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.215 13:18:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:50.215 13:18:46 rpc -- scripts/common.sh@345 -- # : 1 00:05:50.215 13:18:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.215 13:18:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.215 13:18:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:50.215 13:18:46 rpc -- scripts/common.sh@353 -- # local d=1 00:05:50.215 13:18:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.215 13:18:46 rpc -- scripts/common.sh@355 -- # echo 1 00:05:50.215 13:18:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.215 13:18:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:50.215 13:18:46 rpc -- scripts/common.sh@353 -- # local d=2 00:05:50.215 13:18:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.215 13:18:46 rpc -- scripts/common.sh@355 -- # echo 2 00:05:50.216 13:18:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.216 13:18:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.216 13:18:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.216 13:18:46 rpc -- scripts/common.sh@368 -- # return 0 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.216 --rc genhtml_branch_coverage=1 00:05:50.216 --rc genhtml_function_coverage=1 00:05:50.216 --rc genhtml_legend=1 00:05:50.216 --rc geninfo_all_blocks=1 00:05:50.216 --rc geninfo_unexecuted_blocks=1 00:05:50.216 00:05:50.216 ' 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.216 --rc genhtml_branch_coverage=1 00:05:50.216 --rc genhtml_function_coverage=1 00:05:50.216 --rc genhtml_legend=1 00:05:50.216 --rc geninfo_all_blocks=1 00:05:50.216 --rc geninfo_unexecuted_blocks=1 00:05:50.216 00:05:50.216 ' 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.216 --rc genhtml_branch_coverage=1 00:05:50.216 --rc genhtml_function_coverage=1 00:05:50.216 --rc genhtml_legend=1 00:05:50.216 --rc geninfo_all_blocks=1 00:05:50.216 --rc geninfo_unexecuted_blocks=1 00:05:50.216 00:05:50.216 ' 00:05:50.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.216 --rc genhtml_branch_coverage=1 00:05:50.216 --rc genhtml_function_coverage=1 00:05:50.216 --rc genhtml_legend=1 00:05:50.216 --rc geninfo_all_blocks=1 00:05:50.216 --rc geninfo_unexecuted_blocks=1 00:05:50.216 00:05:50.216 ' 00:05:50.216 13:18:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69507 00:05:50.216 13:18:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.216 13:18:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69507 00:05:50.216 13:18:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 69507 ']' 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
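Before any RPC test can run, rpc.sh brings up a single spdk_tgt with only the bdev subsystem enabled (-e bdev), records its PID (69507 here), installs a kill-on-exit trap, and waits for the UNIX socket /var/tmp/spdk.sock to accept connections. Stripped of the harness helpers, the startup traced above amounts to something like the following sketch (waitforlisten and killprocess are helpers from the sourced autotest_common.sh, not standalone tools):

    # hedged reconstruction of the rpc.sh startup sequence
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_pid"   # blocks until /var/tmp/spdk.sock is listening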
00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.216 13:18:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.216 [2024-11-18 13:18:46.328956] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:05:50.216 [2024-11-18 13:18:46.329264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69507 ] 00:05:50.474 [2024-11-18 13:18:46.485440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.474 [2024-11-18 13:18:46.504118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:50.474 [2024-11-18 13:18:46.504326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69507' to capture a snapshot of events at runtime. 00:05:50.474 [2024-11-18 13:18:46.504346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.474 [2024-11-18 13:18:46.504354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.474 [2024-11-18 13:18:46.504363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69507 for offline analysis/debug. 00:05:50.474 [2024-11-18 13:18:46.504669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.042 13:18:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.042 13:18:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.042 13:18:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.042 13:18:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.042 13:18:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:51.042 13:18:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:51.042 13:18:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.042 13:18:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.042 13:18:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.042 ************************************ 00:05:51.042 START TEST rpc_integrity 00:05:51.042 ************************************ 00:05:51.042 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:51.042 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.042 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.042 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.042 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.042 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.042 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.302 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.302 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.302 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.302 13:18:47 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.302 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.302 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.303 { 00:05:51.303 "name": "Malloc0", 00:05:51.303 "aliases": [ 00:05:51.303 "9e11a543-ca54-48b6-9feb-c9e253f6246e" 00:05:51.303 ], 00:05:51.303 "product_name": "Malloc disk", 00:05:51.303 "block_size": 512, 00:05:51.303 "num_blocks": 16384, 00:05:51.303 "uuid": "9e11a543-ca54-48b6-9feb-c9e253f6246e", 00:05:51.303 "assigned_rate_limits": { 00:05:51.303 "rw_ios_per_sec": 0, 00:05:51.303 "rw_mbytes_per_sec": 0, 00:05:51.303 "r_mbytes_per_sec": 0, 00:05:51.303 "w_mbytes_per_sec": 0 00:05:51.303 }, 00:05:51.303 "claimed": false, 00:05:51.303 "zoned": false, 00:05:51.303 "supported_io_types": { 00:05:51.303 "read": true, 00:05:51.303 "write": true, 00:05:51.303 "unmap": true, 00:05:51.303 "flush": true, 00:05:51.303 "reset": true, 00:05:51.303 "nvme_admin": false, 00:05:51.303 "nvme_io": false, 00:05:51.303 "nvme_io_md": false, 00:05:51.303 "write_zeroes": true, 00:05:51.303 "zcopy": true, 00:05:51.303 "get_zone_info": false, 00:05:51.303 "zone_management": false, 00:05:51.303 "zone_append": false, 00:05:51.303 "compare": false, 00:05:51.303 "compare_and_write": false, 00:05:51.303 "abort": true, 00:05:51.303 "seek_hole": false, 00:05:51.303 "seek_data": false, 00:05:51.303 "copy": true, 00:05:51.303 "nvme_iov_md": false 00:05:51.303 }, 00:05:51.303 "memory_domains": [ 00:05:51.303 { 00:05:51.303 "dma_device_id": "system", 00:05:51.303 "dma_device_type": 1 00:05:51.303 }, 00:05:51.303 { 00:05:51.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.303 "dma_device_type": 2 00:05:51.303 } 00:05:51.303 ], 00:05:51.303 "driver_specific": {} 00:05:51.303 } 00:05:51.303 ]' 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 [2024-11-18 13:18:47.235658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:51.303 [2024-11-18 13:18:47.235720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.303 [2024-11-18 13:18:47.235752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:51.303 [2024-11-18 13:18:47.235763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.303 [2024-11-18 13:18:47.238050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.303 [2024-11-18 13:18:47.238089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.303 Passthru0 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.303 
13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.303 { 00:05:51.303 "name": "Malloc0", 00:05:51.303 "aliases": [ 00:05:51.303 "9e11a543-ca54-48b6-9feb-c9e253f6246e" 00:05:51.303 ], 00:05:51.303 "product_name": "Malloc disk", 00:05:51.303 "block_size": 512, 00:05:51.303 "num_blocks": 16384, 00:05:51.303 "uuid": "9e11a543-ca54-48b6-9feb-c9e253f6246e", 00:05:51.303 "assigned_rate_limits": { 00:05:51.303 "rw_ios_per_sec": 0, 00:05:51.303 "rw_mbytes_per_sec": 0, 00:05:51.303 "r_mbytes_per_sec": 0, 00:05:51.303 "w_mbytes_per_sec": 0 00:05:51.303 }, 00:05:51.303 "claimed": true, 00:05:51.303 "claim_type": "exclusive_write", 00:05:51.303 "zoned": false, 00:05:51.303 "supported_io_types": { 00:05:51.303 "read": true, 00:05:51.303 "write": true, 00:05:51.303 "unmap": true, 00:05:51.303 "flush": true, 00:05:51.303 "reset": true, 00:05:51.303 "nvme_admin": false, 00:05:51.303 "nvme_io": false, 00:05:51.303 "nvme_io_md": false, 00:05:51.303 "write_zeroes": true, 00:05:51.303 "zcopy": true, 00:05:51.303 "get_zone_info": false, 00:05:51.303 "zone_management": false, 00:05:51.303 "zone_append": false, 00:05:51.303 "compare": false, 00:05:51.303 "compare_and_write": false, 00:05:51.303 "abort": true, 00:05:51.303 "seek_hole": false, 00:05:51.303 "seek_data": false, 00:05:51.303 "copy": true, 00:05:51.303 "nvme_iov_md": false 00:05:51.303 }, 00:05:51.303 "memory_domains": [ 00:05:51.303 { 00:05:51.303 "dma_device_id": "system", 00:05:51.303 "dma_device_type": 1 00:05:51.303 }, 00:05:51.303 { 00:05:51.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.303 "dma_device_type": 2 00:05:51.303 } 00:05:51.303 ], 00:05:51.303 "driver_specific": {} 00:05:51.303 }, 00:05:51.303 { 00:05:51.303 "name": "Passthru0", 00:05:51.303 "aliases": [ 00:05:51.303 "0eb577f7-7f6b-58ce-b9bd-761dabea6fcd" 00:05:51.303 ], 00:05:51.303 "product_name": "passthru", 00:05:51.303 "block_size": 512, 00:05:51.303 "num_blocks": 16384, 00:05:51.303 "uuid": "0eb577f7-7f6b-58ce-b9bd-761dabea6fcd", 00:05:51.303 "assigned_rate_limits": { 00:05:51.303 "rw_ios_per_sec": 0, 00:05:51.303 "rw_mbytes_per_sec": 0, 00:05:51.303 "r_mbytes_per_sec": 0, 00:05:51.303 "w_mbytes_per_sec": 0 00:05:51.303 }, 00:05:51.303 "claimed": false, 00:05:51.303 "zoned": false, 00:05:51.303 "supported_io_types": { 00:05:51.303 "read": true, 00:05:51.303 "write": true, 00:05:51.303 "unmap": true, 00:05:51.303 "flush": true, 00:05:51.303 "reset": true, 00:05:51.303 "nvme_admin": false, 00:05:51.303 "nvme_io": false, 00:05:51.303 "nvme_io_md": false, 00:05:51.303 "write_zeroes": true, 00:05:51.303 "zcopy": true, 00:05:51.303 "get_zone_info": false, 00:05:51.303 "zone_management": false, 00:05:51.303 "zone_append": false, 00:05:51.303 "compare": false, 00:05:51.303 "compare_and_write": false, 00:05:51.303 "abort": true, 00:05:51.303 "seek_hole": false, 00:05:51.303 "seek_data": false, 00:05:51.303 "copy": true, 00:05:51.303 "nvme_iov_md": false 00:05:51.303 }, 00:05:51.303 "memory_domains": [ 00:05:51.303 { 00:05:51.303 "dma_device_id": "system", 00:05:51.303 "dma_device_type": 1 00:05:51.303 }, 00:05:51.303 { 00:05:51.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.303 "dma_device_type": 2 
00:05:51.303 } 00:05:51.303 ], 00:05:51.303 "driver_specific": { 00:05:51.303 "passthru": { 00:05:51.303 "name": "Passthru0", 00:05:51.303 "base_bdev_name": "Malloc0" 00:05:51.303 } 00:05:51.303 } 00:05:51.303 } 00:05:51.303 ]' 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.303 ************************************ 00:05:51.303 END TEST rpc_integrity 00:05:51.303 ************************************ 00:05:51.303 13:18:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.303 00:05:51.303 real 0m0.230s 00:05:51.303 user 0m0.133s 00:05:51.303 sys 0m0.033s 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.303 13:18:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 13:18:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:51.303 13:18:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.303 13:18:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.303 13:18:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 ************************************ 00:05:51.303 START TEST rpc_plugins 00:05:51.303 ************************************ 00:05:51.303 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:51.303 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:51.304 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.304 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.304 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.304 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:51.304 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:51.304 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.304 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.304 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.304 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:51.304 { 00:05:51.304 "name": "Malloc1", 00:05:51.304 "aliases": 
[ 00:05:51.304 "7b2f2e77-b24b-405b-ad8e-94a0582e0565" 00:05:51.304 ], 00:05:51.304 "product_name": "Malloc disk", 00:05:51.304 "block_size": 4096, 00:05:51.304 "num_blocks": 256, 00:05:51.304 "uuid": "7b2f2e77-b24b-405b-ad8e-94a0582e0565", 00:05:51.304 "assigned_rate_limits": { 00:05:51.304 "rw_ios_per_sec": 0, 00:05:51.304 "rw_mbytes_per_sec": 0, 00:05:51.304 "r_mbytes_per_sec": 0, 00:05:51.304 "w_mbytes_per_sec": 0 00:05:51.304 }, 00:05:51.304 "claimed": false, 00:05:51.304 "zoned": false, 00:05:51.304 "supported_io_types": { 00:05:51.304 "read": true, 00:05:51.304 "write": true, 00:05:51.304 "unmap": true, 00:05:51.304 "flush": true, 00:05:51.304 "reset": true, 00:05:51.304 "nvme_admin": false, 00:05:51.304 "nvme_io": false, 00:05:51.304 "nvme_io_md": false, 00:05:51.304 "write_zeroes": true, 00:05:51.304 "zcopy": true, 00:05:51.304 "get_zone_info": false, 00:05:51.304 "zone_management": false, 00:05:51.304 "zone_append": false, 00:05:51.304 "compare": false, 00:05:51.304 "compare_and_write": false, 00:05:51.304 "abort": true, 00:05:51.304 "seek_hole": false, 00:05:51.304 "seek_data": false, 00:05:51.304 "copy": true, 00:05:51.304 "nvme_iov_md": false 00:05:51.304 }, 00:05:51.304 "memory_domains": [ 00:05:51.304 { 00:05:51.304 "dma_device_id": "system", 00:05:51.304 "dma_device_type": 1 00:05:51.304 }, 00:05:51.304 { 00:05:51.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.304 "dma_device_type": 2 00:05:51.304 } 00:05:51.304 ], 00:05:51.304 "driver_specific": {} 00:05:51.304 } 00:05:51.304 ]' 00:05:51.304 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:51.563 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:51.563 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:51.563 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.563 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.563 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.563 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:51.563 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.563 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.563 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.563 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:51.563 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:51.563 ************************************ 00:05:51.563 END TEST rpc_plugins 00:05:51.563 ************************************ 00:05:51.563 13:18:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:51.563 00:05:51.563 real 0m0.114s 00:05:51.563 user 0m0.058s 00:05:51.563 sys 0m0.017s 00:05:51.563 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.563 13:18:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.563 13:18:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:51.563 13:18:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.563 13:18:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.563 13:18:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.563 ************************************ 00:05:51.563 START TEST rpc_trace_cmd_test 00:05:51.563 ************************************ 00:05:51.563 13:18:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:51.563 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:51.563 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:51.563 13:18:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.563 13:18:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.563 13:18:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.563 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:51.563 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69507", 00:05:51.563 "tpoint_group_mask": "0x8", 00:05:51.563 "iscsi_conn": { 00:05:51.563 "mask": "0x2", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "scsi": { 00:05:51.563 "mask": "0x4", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "bdev": { 00:05:51.563 "mask": "0x8", 00:05:51.563 "tpoint_mask": "0xffffffffffffffff" 00:05:51.563 }, 00:05:51.563 "nvmf_rdma": { 00:05:51.563 "mask": "0x10", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "nvmf_tcp": { 00:05:51.563 "mask": "0x20", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "ftl": { 00:05:51.563 "mask": "0x40", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "blobfs": { 00:05:51.563 "mask": "0x80", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "dsa": { 00:05:51.563 "mask": "0x200", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "thread": { 00:05:51.563 "mask": "0x400", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "nvme_pcie": { 00:05:51.563 "mask": "0x800", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "iaa": { 00:05:51.563 "mask": "0x1000", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "nvme_tcp": { 00:05:51.563 "mask": "0x2000", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.563 }, 00:05:51.563 "bdev_nvme": { 00:05:51.563 "mask": "0x4000", 00:05:51.563 "tpoint_mask": "0x0" 00:05:51.564 }, 00:05:51.564 "sock": { 00:05:51.564 "mask": "0x8000", 00:05:51.564 "tpoint_mask": "0x0" 00:05:51.564 }, 00:05:51.564 "blob": { 00:05:51.564 "mask": "0x10000", 00:05:51.564 "tpoint_mask": "0x0" 00:05:51.564 }, 00:05:51.564 "bdev_raid": { 00:05:51.564 "mask": "0x20000", 00:05:51.564 "tpoint_mask": "0x0" 00:05:51.564 }, 00:05:51.564 "scheduler": { 00:05:51.564 "mask": "0x40000", 00:05:51.564 "tpoint_mask": "0x0" 00:05:51.564 } 00:05:51.564 }' 00:05:51.564 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:51.564 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:51.564 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:51.564 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:51.564 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:51.564 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:51.564 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:51.823 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:51.823 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:51.823 ************************************ 00:05:51.823 END TEST rpc_trace_cmd_test 00:05:51.823 ************************************ 00:05:51.823 13:18:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:51.823 00:05:51.823 real 0m0.167s 
00:05:51.823 user 0m0.130s 00:05:51.823 sys 0m0.028s 00:05:51.823 13:18:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.823 13:18:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 13:18:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:51.823 13:18:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:51.823 13:18:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:51.823 13:18:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.823 13:18:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.823 13:18:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 ************************************ 00:05:51.823 START TEST rpc_daemon_integrity 00:05:51.823 ************************************ 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.823 { 00:05:51.823 "name": "Malloc2", 00:05:51.823 "aliases": [ 00:05:51.823 "dc57a57e-b73b-4833-bb05-9761ebe1408a" 00:05:51.823 ], 00:05:51.823 "product_name": "Malloc disk", 00:05:51.823 "block_size": 512, 00:05:51.823 "num_blocks": 16384, 00:05:51.823 "uuid": "dc57a57e-b73b-4833-bb05-9761ebe1408a", 00:05:51.823 "assigned_rate_limits": { 00:05:51.823 "rw_ios_per_sec": 0, 00:05:51.823 "rw_mbytes_per_sec": 0, 00:05:51.823 "r_mbytes_per_sec": 0, 00:05:51.823 "w_mbytes_per_sec": 0 00:05:51.823 }, 00:05:51.823 "claimed": false, 00:05:51.823 "zoned": false, 00:05:51.823 "supported_io_types": { 00:05:51.823 "read": true, 00:05:51.823 "write": true, 00:05:51.823 "unmap": true, 00:05:51.823 "flush": true, 00:05:51.823 "reset": true, 00:05:51.823 "nvme_admin": false, 00:05:51.823 "nvme_io": false, 00:05:51.823 "nvme_io_md": false, 00:05:51.823 "write_zeroes": true, 00:05:51.823 "zcopy": true, 00:05:51.823 "get_zone_info": false, 00:05:51.823 "zone_management": false, 00:05:51.823 "zone_append": false, 00:05:51.823 "compare": false, 00:05:51.823 
"compare_and_write": false, 00:05:51.823 "abort": true, 00:05:51.823 "seek_hole": false, 00:05:51.823 "seek_data": false, 00:05:51.823 "copy": true, 00:05:51.823 "nvme_iov_md": false 00:05:51.823 }, 00:05:51.823 "memory_domains": [ 00:05:51.823 { 00:05:51.823 "dma_device_id": "system", 00:05:51.823 "dma_device_type": 1 00:05:51.823 }, 00:05:51.823 { 00:05:51.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.823 "dma_device_type": 2 00:05:51.823 } 00:05:51.823 ], 00:05:51.823 "driver_specific": {} 00:05:51.823 } 00:05:51.823 ]' 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.823 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.824 [2024-11-18 13:18:47.888564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:51.824 [2024-11-18 13:18:47.888620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.824 [2024-11-18 13:18:47.888647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:51.824 [2024-11-18 13:18:47.888656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.824 [2024-11-18 13:18:47.890831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.824 [2024-11-18 13:18:47.890864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.824 Passthru0 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.824 { 00:05:51.824 "name": "Malloc2", 00:05:51.824 "aliases": [ 00:05:51.824 "dc57a57e-b73b-4833-bb05-9761ebe1408a" 00:05:51.824 ], 00:05:51.824 "product_name": "Malloc disk", 00:05:51.824 "block_size": 512, 00:05:51.824 "num_blocks": 16384, 00:05:51.824 "uuid": "dc57a57e-b73b-4833-bb05-9761ebe1408a", 00:05:51.824 "assigned_rate_limits": { 00:05:51.824 "rw_ios_per_sec": 0, 00:05:51.824 "rw_mbytes_per_sec": 0, 00:05:51.824 "r_mbytes_per_sec": 0, 00:05:51.824 "w_mbytes_per_sec": 0 00:05:51.824 }, 00:05:51.824 "claimed": true, 00:05:51.824 "claim_type": "exclusive_write", 00:05:51.824 "zoned": false, 00:05:51.824 "supported_io_types": { 00:05:51.824 "read": true, 00:05:51.824 "write": true, 00:05:51.824 "unmap": true, 00:05:51.824 "flush": true, 00:05:51.824 "reset": true, 00:05:51.824 "nvme_admin": false, 00:05:51.824 "nvme_io": false, 00:05:51.824 "nvme_io_md": false, 00:05:51.824 "write_zeroes": true, 00:05:51.824 "zcopy": true, 00:05:51.824 "get_zone_info": false, 00:05:51.824 "zone_management": false, 00:05:51.824 "zone_append": false, 00:05:51.824 "compare": false, 00:05:51.824 "compare_and_write": false, 00:05:51.824 "abort": true, 00:05:51.824 "seek_hole": false, 00:05:51.824 "seek_data": false, 
00:05:51.824 "copy": true, 00:05:51.824 "nvme_iov_md": false 00:05:51.824 }, 00:05:51.824 "memory_domains": [ 00:05:51.824 { 00:05:51.824 "dma_device_id": "system", 00:05:51.824 "dma_device_type": 1 00:05:51.824 }, 00:05:51.824 { 00:05:51.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.824 "dma_device_type": 2 00:05:51.824 } 00:05:51.824 ], 00:05:51.824 "driver_specific": {} 00:05:51.824 }, 00:05:51.824 { 00:05:51.824 "name": "Passthru0", 00:05:51.824 "aliases": [ 00:05:51.824 "c211964e-46f6-5b72-9a2f-a7c77c3bbb70" 00:05:51.824 ], 00:05:51.824 "product_name": "passthru", 00:05:51.824 "block_size": 512, 00:05:51.824 "num_blocks": 16384, 00:05:51.824 "uuid": "c211964e-46f6-5b72-9a2f-a7c77c3bbb70", 00:05:51.824 "assigned_rate_limits": { 00:05:51.824 "rw_ios_per_sec": 0, 00:05:51.824 "rw_mbytes_per_sec": 0, 00:05:51.824 "r_mbytes_per_sec": 0, 00:05:51.824 "w_mbytes_per_sec": 0 00:05:51.824 }, 00:05:51.824 "claimed": false, 00:05:51.824 "zoned": false, 00:05:51.824 "supported_io_types": { 00:05:51.824 "read": true, 00:05:51.824 "write": true, 00:05:51.824 "unmap": true, 00:05:51.824 "flush": true, 00:05:51.824 "reset": true, 00:05:51.824 "nvme_admin": false, 00:05:51.824 "nvme_io": false, 00:05:51.824 "nvme_io_md": false, 00:05:51.824 "write_zeroes": true, 00:05:51.824 "zcopy": true, 00:05:51.824 "get_zone_info": false, 00:05:51.824 "zone_management": false, 00:05:51.824 "zone_append": false, 00:05:51.824 "compare": false, 00:05:51.824 "compare_and_write": false, 00:05:51.824 "abort": true, 00:05:51.824 "seek_hole": false, 00:05:51.824 "seek_data": false, 00:05:51.824 "copy": true, 00:05:51.824 "nvme_iov_md": false 00:05:51.824 }, 00:05:51.824 "memory_domains": [ 00:05:51.824 { 00:05:51.824 "dma_device_id": "system", 00:05:51.824 "dma_device_type": 1 00:05:51.824 }, 00:05:51.824 { 00:05:51.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.824 "dma_device_type": 2 00:05:51.824 } 00:05:51.824 ], 00:05:51.824 "driver_specific": { 00:05:51.824 "passthru": { 00:05:51.824 "name": "Passthru0", 00:05:51.824 "base_bdev_name": "Malloc2" 00:05:51.824 } 00:05:51.824 } 00:05:51.824 } 00:05:51.824 ]' 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.824 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:52.083 13:18:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:52.083 13:18:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:52.083 00:05:52.083 real 0m0.226s 00:05:52.083 user 0m0.131s 00:05:52.083 sys 0m0.029s 00:05:52.083 13:18:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.083 13:18:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.083 ************************************ 00:05:52.083 END TEST rpc_daemon_integrity 00:05:52.083 ************************************ 00:05:52.083 13:18:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:52.083 13:18:48 rpc -- rpc/rpc.sh@84 -- # killprocess 69507 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 69507 ']' 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@958 -- # kill -0 69507 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@959 -- # uname 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69507 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.083 killing process with pid 69507 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69507' 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@973 -- # kill 69507 00:05:52.083 13:18:48 rpc -- common/autotest_common.sh@978 -- # wait 69507 00:05:52.342 00:05:52.342 real 0m2.201s 00:05:52.342 user 0m2.613s 00:05:52.342 sys 0m0.566s 00:05:52.342 13:18:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.342 13:18:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.342 ************************************ 00:05:52.342 END TEST rpc 00:05:52.342 ************************************ 00:05:52.342 13:18:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:52.342 13:18:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.342 13:18:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.342 13:18:48 -- common/autotest_common.sh@10 -- # set +x 00:05:52.342 ************************************ 00:05:52.342 START TEST skip_rpc 00:05:52.342 ************************************ 00:05:52.342 13:18:48 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:52.342 * Looking for test storage... 
00:05:52.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.342 13:18:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.342 13:18:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.342 13:18:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.600 13:18:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.600 13:18:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.600 13:18:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.601 13:18:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:52.601 13:18:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.601 13:18:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.601 --rc genhtml_branch_coverage=1 00:05:52.601 --rc genhtml_function_coverage=1 00:05:52.601 --rc genhtml_legend=1 00:05:52.601 --rc geninfo_all_blocks=1 00:05:52.601 --rc geninfo_unexecuted_blocks=1 00:05:52.601 00:05:52.601 ' 00:05:52.601 13:18:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.601 --rc genhtml_branch_coverage=1 00:05:52.601 --rc genhtml_function_coverage=1 00:05:52.601 --rc genhtml_legend=1 00:05:52.601 --rc geninfo_all_blocks=1 00:05:52.601 --rc geninfo_unexecuted_blocks=1 00:05:52.601 00:05:52.601 ' 00:05:52.601 13:18:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.601 --rc genhtml_branch_coverage=1 00:05:52.601 --rc genhtml_function_coverage=1 00:05:52.601 --rc genhtml_legend=1 00:05:52.601 --rc geninfo_all_blocks=1 00:05:52.601 --rc geninfo_unexecuted_blocks=1 00:05:52.601 00:05:52.601 ' 00:05:52.601 13:18:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.601 --rc genhtml_branch_coverage=1 00:05:52.601 --rc genhtml_function_coverage=1 00:05:52.601 --rc genhtml_legend=1 00:05:52.601 --rc geninfo_all_blocks=1 00:05:52.601 --rc geninfo_unexecuted_blocks=1 00:05:52.601 00:05:52.601 ' 00:05:52.601 13:18:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:52.601 13:18:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:52.601 13:18:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:52.601 13:18:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.601 13:18:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.601 13:18:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.601 ************************************ 00:05:52.601 START TEST skip_rpc 00:05:52.601 ************************************ 00:05:52.601 13:18:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:52.601 13:18:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69703 00:05:52.601 13:18:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:52.601 13:18:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.601 13:18:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:52.601 [2024-11-18 13:18:48.565733] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:05:52.601 [2024-11-18 13:18:48.565842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69703 ] 00:05:52.601 [2024-11-18 13:18:48.721504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.858 [2024-11-18 13:18:48.741558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69703 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69703 ']' 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69703 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69703 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.125 killing process with pid 69703 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69703' 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69703 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69703 00:05:58.125 00:05:58.125 real 0m5.268s 00:05:58.125 user 0m4.934s 00:05:58.125 sys 0m0.239s 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.125 13:18:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.125 ************************************ 00:05:58.125 END TEST skip_rpc 00:05:58.125 
************************************ 00:05:58.125 13:18:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:58.125 13:18:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.125 13:18:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.125 13:18:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.125 ************************************ 00:05:58.125 START TEST skip_rpc_with_json 00:05:58.125 ************************************ 00:05:58.125 13:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:58.125 13:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:58.125 13:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69791 00:05:58.125 13:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.125 13:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69791 00:05:58.125 13:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69791 ']' 00:05:58.125 13:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.125 13:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.125 13:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.126 13:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.126 13:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.126 13:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.126 [2024-11-18 13:18:53.897507] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:05:58.126 [2024-11-18 13:18:53.897623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69791 ] 00:05:58.126 [2024-11-18 13:18:54.051437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.126 [2024-11-18 13:18:54.069720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.689 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.689 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:58.689 13:18:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:58.689 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.689 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.689 [2024-11-18 13:18:54.699623] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:58.689 request: 00:05:58.689 { 00:05:58.689 "trtype": "tcp", 00:05:58.689 "method": "nvmf_get_transports", 00:05:58.689 "req_id": 1 00:05:58.689 } 00:05:58.689 Got JSON-RPC error response 00:05:58.689 response: 00:05:58.689 { 00:05:58.689 "code": -19, 00:05:58.689 "message": "No such device" 00:05:58.689 } 00:05:58.690 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.690 13:18:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:58.690 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.690 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.690 [2024-11-18 13:18:54.707699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.690 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.690 13:18:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:58.690 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.690 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.948 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.948 13:18:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.948 { 00:05:58.948 "subsystems": [ 00:05:58.948 { 00:05:58.948 "subsystem": "fsdev", 00:05:58.948 "config": [ 00:05:58.948 { 00:05:58.948 "method": "fsdev_set_opts", 00:05:58.948 "params": { 00:05:58.948 "fsdev_io_pool_size": 65535, 00:05:58.948 "fsdev_io_cache_size": 256 00:05:58.948 } 00:05:58.948 } 00:05:58.948 ] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "keyring", 00:05:58.948 "config": [] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "iobuf", 00:05:58.948 "config": [ 00:05:58.948 { 00:05:58.948 "method": "iobuf_set_options", 00:05:58.948 "params": { 00:05:58.948 "small_pool_count": 8192, 00:05:58.948 "large_pool_count": 1024, 00:05:58.948 "small_bufsize": 8192, 00:05:58.948 "large_bufsize": 135168, 00:05:58.948 "enable_numa": false 00:05:58.948 } 00:05:58.948 } 00:05:58.948 ] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "sock", 00:05:58.948 "config": [ 00:05:58.948 { 
00:05:58.948 "method": "sock_set_default_impl", 00:05:58.948 "params": { 00:05:58.948 "impl_name": "posix" 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "sock_impl_set_options", 00:05:58.948 "params": { 00:05:58.948 "impl_name": "ssl", 00:05:58.948 "recv_buf_size": 4096, 00:05:58.948 "send_buf_size": 4096, 00:05:58.948 "enable_recv_pipe": true, 00:05:58.948 "enable_quickack": false, 00:05:58.948 "enable_placement_id": 0, 00:05:58.948 "enable_zerocopy_send_server": true, 00:05:58.948 "enable_zerocopy_send_client": false, 00:05:58.948 "zerocopy_threshold": 0, 00:05:58.948 "tls_version": 0, 00:05:58.948 "enable_ktls": false 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "sock_impl_set_options", 00:05:58.948 "params": { 00:05:58.948 "impl_name": "posix", 00:05:58.948 "recv_buf_size": 2097152, 00:05:58.948 "send_buf_size": 2097152, 00:05:58.948 "enable_recv_pipe": true, 00:05:58.948 "enable_quickack": false, 00:05:58.948 "enable_placement_id": 0, 00:05:58.948 "enable_zerocopy_send_server": true, 00:05:58.948 "enable_zerocopy_send_client": false, 00:05:58.948 "zerocopy_threshold": 0, 00:05:58.948 "tls_version": 0, 00:05:58.948 "enable_ktls": false 00:05:58.948 } 00:05:58.948 } 00:05:58.948 ] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "vmd", 00:05:58.948 "config": [] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "accel", 00:05:58.948 "config": [ 00:05:58.948 { 00:05:58.948 "method": "accel_set_options", 00:05:58.948 "params": { 00:05:58.948 "small_cache_size": 128, 00:05:58.948 "large_cache_size": 16, 00:05:58.948 "task_count": 2048, 00:05:58.948 "sequence_count": 2048, 00:05:58.948 "buf_count": 2048 00:05:58.948 } 00:05:58.948 } 00:05:58.948 ] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "bdev", 00:05:58.948 "config": [ 00:05:58.948 { 00:05:58.948 "method": "bdev_set_options", 00:05:58.948 "params": { 00:05:58.948 "bdev_io_pool_size": 65535, 00:05:58.948 "bdev_io_cache_size": 256, 00:05:58.948 "bdev_auto_examine": true, 00:05:58.948 "iobuf_small_cache_size": 128, 00:05:58.948 "iobuf_large_cache_size": 16 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "bdev_raid_set_options", 00:05:58.948 "params": { 00:05:58.948 "process_window_size_kb": 1024, 00:05:58.948 "process_max_bandwidth_mb_sec": 0 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "bdev_iscsi_set_options", 00:05:58.948 "params": { 00:05:58.948 "timeout_sec": 30 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "bdev_nvme_set_options", 00:05:58.948 "params": { 00:05:58.948 "action_on_timeout": "none", 00:05:58.948 "timeout_us": 0, 00:05:58.948 "timeout_admin_us": 0, 00:05:58.948 "keep_alive_timeout_ms": 10000, 00:05:58.948 "arbitration_burst": 0, 00:05:58.948 "low_priority_weight": 0, 00:05:58.948 "medium_priority_weight": 0, 00:05:58.948 "high_priority_weight": 0, 00:05:58.948 "nvme_adminq_poll_period_us": 10000, 00:05:58.948 "nvme_ioq_poll_period_us": 0, 00:05:58.948 "io_queue_requests": 0, 00:05:58.948 "delay_cmd_submit": true, 00:05:58.948 "transport_retry_count": 4, 00:05:58.948 "bdev_retry_count": 3, 00:05:58.948 "transport_ack_timeout": 0, 00:05:58.948 "ctrlr_loss_timeout_sec": 0, 00:05:58.948 "reconnect_delay_sec": 0, 00:05:58.948 "fast_io_fail_timeout_sec": 0, 00:05:58.948 "disable_auto_failback": false, 00:05:58.948 "generate_uuids": false, 00:05:58.948 "transport_tos": 0, 00:05:58.948 "nvme_error_stat": false, 00:05:58.948 "rdma_srq_size": 0, 00:05:58.948 "io_path_stat": false, 
00:05:58.948 "allow_accel_sequence": false, 00:05:58.948 "rdma_max_cq_size": 0, 00:05:58.948 "rdma_cm_event_timeout_ms": 0, 00:05:58.948 "dhchap_digests": [ 00:05:58.948 "sha256", 00:05:58.948 "sha384", 00:05:58.948 "sha512" 00:05:58.948 ], 00:05:58.948 "dhchap_dhgroups": [ 00:05:58.948 "null", 00:05:58.948 "ffdhe2048", 00:05:58.948 "ffdhe3072", 00:05:58.948 "ffdhe4096", 00:05:58.948 "ffdhe6144", 00:05:58.948 "ffdhe8192" 00:05:58.948 ] 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "bdev_nvme_set_hotplug", 00:05:58.948 "params": { 00:05:58.948 "period_us": 100000, 00:05:58.948 "enable": false 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "bdev_wait_for_examine" 00:05:58.948 } 00:05:58.948 ] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "scsi", 00:05:58.948 "config": null 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "scheduler", 00:05:58.948 "config": [ 00:05:58.948 { 00:05:58.948 "method": "framework_set_scheduler", 00:05:58.948 "params": { 00:05:58.948 "name": "static" 00:05:58.948 } 00:05:58.948 } 00:05:58.948 ] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "vhost_scsi", 00:05:58.948 "config": [] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "vhost_blk", 00:05:58.948 "config": [] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "ublk", 00:05:58.948 "config": [] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "nbd", 00:05:58.948 "config": [] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "nvmf", 00:05:58.948 "config": [ 00:05:58.948 { 00:05:58.948 "method": "nvmf_set_config", 00:05:58.948 "params": { 00:05:58.948 "discovery_filter": "match_any", 00:05:58.948 "admin_cmd_passthru": { 00:05:58.948 "identify_ctrlr": false 00:05:58.948 }, 00:05:58.948 "dhchap_digests": [ 00:05:58.948 "sha256", 00:05:58.948 "sha384", 00:05:58.948 "sha512" 00:05:58.948 ], 00:05:58.948 "dhchap_dhgroups": [ 00:05:58.948 "null", 00:05:58.948 "ffdhe2048", 00:05:58.948 "ffdhe3072", 00:05:58.948 "ffdhe4096", 00:05:58.948 "ffdhe6144", 00:05:58.948 "ffdhe8192" 00:05:58.948 ] 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "nvmf_set_max_subsystems", 00:05:58.948 "params": { 00:05:58.948 "max_subsystems": 1024 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "nvmf_set_crdt", 00:05:58.948 "params": { 00:05:58.948 "crdt1": 0, 00:05:58.948 "crdt2": 0, 00:05:58.948 "crdt3": 0 00:05:58.948 } 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "method": "nvmf_create_transport", 00:05:58.948 "params": { 00:05:58.948 "trtype": "TCP", 00:05:58.948 "max_queue_depth": 128, 00:05:58.948 "max_io_qpairs_per_ctrlr": 127, 00:05:58.948 "in_capsule_data_size": 4096, 00:05:58.948 "max_io_size": 131072, 00:05:58.948 "io_unit_size": 131072, 00:05:58.948 "max_aq_depth": 128, 00:05:58.948 "num_shared_buffers": 511, 00:05:58.948 "buf_cache_size": 4294967295, 00:05:58.948 "dif_insert_or_strip": false, 00:05:58.948 "zcopy": false, 00:05:58.948 "c2h_success": true, 00:05:58.948 "sock_priority": 0, 00:05:58.948 "abort_timeout_sec": 1, 00:05:58.948 "ack_timeout": 0, 00:05:58.948 "data_wr_pool_size": 0 00:05:58.948 } 00:05:58.948 } 00:05:58.948 ] 00:05:58.948 }, 00:05:58.948 { 00:05:58.948 "subsystem": "iscsi", 00:05:58.948 "config": [ 00:05:58.948 { 00:05:58.948 "method": "iscsi_set_options", 00:05:58.949 "params": { 00:05:58.949 "node_base": "iqn.2016-06.io.spdk", 00:05:58.949 "max_sessions": 128, 00:05:58.949 "max_connections_per_session": 2, 00:05:58.949 "max_queue_depth": 64, 00:05:58.949 
"default_time2wait": 2, 00:05:58.949 "default_time2retain": 20, 00:05:58.949 "first_burst_length": 8192, 00:05:58.949 "immediate_data": true, 00:05:58.949 "allow_duplicated_isid": false, 00:05:58.949 "error_recovery_level": 0, 00:05:58.949 "nop_timeout": 60, 00:05:58.949 "nop_in_interval": 30, 00:05:58.949 "disable_chap": false, 00:05:58.949 "require_chap": false, 00:05:58.949 "mutual_chap": false, 00:05:58.949 "chap_group": 0, 00:05:58.949 "max_large_datain_per_connection": 64, 00:05:58.949 "max_r2t_per_connection": 4, 00:05:58.949 "pdu_pool_size": 36864, 00:05:58.949 "immediate_data_pool_size": 16384, 00:05:58.949 "data_out_pool_size": 2048 00:05:58.949 } 00:05:58.949 } 00:05:58.949 ] 00:05:58.949 } 00:05:58.949 ] 00:05:58.949 } 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69791 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69791 ']' 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69791 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69791 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.949 killing process with pid 69791 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69791' 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69791 00:05:58.949 13:18:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69791 00:05:59.207 13:18:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69819 00:05:59.207 13:18:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.207 13:18:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69819 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69819 ']' 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69819 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69819 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.466 killing process with pid 69819 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69819' 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69819 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69819 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.466 00:06:04.466 real 0m6.569s 00:06:04.466 user 0m6.226s 00:06:04.466 sys 0m0.533s 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.466 13:19:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.466 ************************************ 00:06:04.466 END TEST skip_rpc_with_json 00:06:04.466 ************************************ 00:06:04.467 13:19:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:04.467 13:19:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.467 13:19:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.467 13:19:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.467 ************************************ 00:06:04.467 START TEST skip_rpc_with_delay 00:06:04.467 ************************************ 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.467 [2024-11-18 13:19:00.504115] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.467 00:06:04.467 real 0m0.107s 00:06:04.467 user 0m0.056s 00:06:04.467 sys 0m0.050s 00:06:04.467 ************************************ 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.467 13:19:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:04.467 END TEST skip_rpc_with_delay 00:06:04.467 ************************************ 00:06:04.467 13:19:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:04.467 13:19:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:04.467 13:19:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:04.467 13:19:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.467 13:19:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.467 13:19:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.467 ************************************ 00:06:04.467 START TEST exit_on_failed_rpc_init 00:06:04.467 ************************************ 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69925 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69925 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69925 ']' 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.467 13:19:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.724 [2024-11-18 13:19:00.652656] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:04.724 [2024-11-18 13:19:00.652762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69925 ] 00:06:04.724 [2024-11-18 13:19:00.799325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.724 [2024-11-18 13:19:00.819064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:05.658 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.658 [2024-11-18 13:19:01.585651] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:05.658 [2024-11-18 13:19:01.585771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69943 ] 00:06:05.658 [2024-11-18 13:19:01.741727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.658 [2024-11-18 13:19:01.761949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.658 [2024-11-18 13:19:01.762045] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:05.658 [2024-11-18 13:19:01.762063] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:05.658 [2024-11-18 13:19:01.762074] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69925 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69925 ']' 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69925 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69925 00:06:05.916 killing process with pid 69925 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69925' 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69925 00:06:05.916 13:19:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69925 00:06:06.175 ************************************ 00:06:06.175 END TEST exit_on_failed_rpc_init 00:06:06.175 ************************************ 00:06:06.175 00:06:06.175 real 0m1.529s 00:06:06.175 user 0m1.689s 00:06:06.175 sys 0m0.368s 00:06:06.175 13:19:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.175 13:19:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.175 13:19:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.175 ************************************ 00:06:06.175 END TEST skip_rpc 00:06:06.175 ************************************ 00:06:06.175 00:06:06.175 real 0m13.794s 00:06:06.175 user 0m13.058s 00:06:06.175 sys 0m1.352s 00:06:06.175 13:19:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.175 13:19:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.175 13:19:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.175 13:19:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.175 13:19:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.175 13:19:02 -- common/autotest_common.sh@10 -- # set +x 00:06:06.175 
************************************ 00:06:06.175 START TEST rpc_client 00:06:06.175 ************************************ 00:06:06.175 13:19:02 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.175 * Looking for test storage... 00:06:06.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:06.175 13:19:02 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.175 13:19:02 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.175 13:19:02 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.434 13:19:02 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.434 13:19:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.434 13:19:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.434 13:19:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.434 13:19:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.434 13:19:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.434 13:19:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.435 13:19:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:06.435 13:19:02 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.435 13:19:02 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.435 --rc genhtml_branch_coverage=1 00:06:06.435 --rc genhtml_function_coverage=1 00:06:06.435 --rc genhtml_legend=1 00:06:06.435 --rc geninfo_all_blocks=1 00:06:06.435 --rc geninfo_unexecuted_blocks=1 00:06:06.435 00:06:06.435 ' 00:06:06.435 13:19:02 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.435 --rc genhtml_branch_coverage=1 00:06:06.435 --rc genhtml_function_coverage=1 00:06:06.435 --rc genhtml_legend=1 00:06:06.435 --rc geninfo_all_blocks=1 00:06:06.435 --rc geninfo_unexecuted_blocks=1 00:06:06.435 00:06:06.435 ' 00:06:06.435 13:19:02 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.435 --rc genhtml_branch_coverage=1 00:06:06.435 --rc genhtml_function_coverage=1 00:06:06.435 --rc genhtml_legend=1 00:06:06.435 --rc geninfo_all_blocks=1 00:06:06.435 --rc geninfo_unexecuted_blocks=1 00:06:06.435 00:06:06.435 ' 00:06:06.435 13:19:02 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.435 --rc genhtml_branch_coverage=1 00:06:06.435 --rc genhtml_function_coverage=1 00:06:06.435 --rc genhtml_legend=1 00:06:06.435 --rc geninfo_all_blocks=1 00:06:06.435 --rc geninfo_unexecuted_blocks=1 00:06:06.435 00:06:06.435 ' 00:06:06.435 13:19:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:06.435 OK 00:06:06.435 13:19:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:06.435 ************************************ 00:06:06.435 END TEST rpc_client 00:06:06.435 ************************************ 00:06:06.435 00:06:06.435 real 0m0.189s 00:06:06.435 user 0m0.100s 00:06:06.435 sys 0m0.086s 00:06:06.435 13:19:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.435 13:19:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:06.435 13:19:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.435 13:19:02 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.435 13:19:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.435 13:19:02 -- common/autotest_common.sh@10 -- # set +x 00:06:06.435 ************************************ 00:06:06.435 START TEST json_config 00:06:06.435 ************************************ 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.435 13:19:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.435 13:19:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.435 13:19:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.435 13:19:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.435 13:19:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.435 13:19:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.435 13:19:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.435 13:19:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.435 13:19:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.435 13:19:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.435 13:19:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.435 13:19:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:06.435 13:19:02 json_config -- scripts/common.sh@345 -- # : 1 00:06:06.435 13:19:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.435 13:19:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.435 13:19:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:06.435 13:19:02 json_config -- scripts/common.sh@353 -- # local d=1 00:06:06.435 13:19:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.435 13:19:02 json_config -- scripts/common.sh@355 -- # echo 1 00:06:06.435 13:19:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.435 13:19:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:06.435 13:19:02 json_config -- scripts/common.sh@353 -- # local d=2 00:06:06.435 13:19:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.435 13:19:02 json_config -- scripts/common.sh@355 -- # echo 2 00:06:06.435 13:19:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.435 13:19:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.435 13:19:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.435 13:19:02 json_config -- scripts/common.sh@368 -- # return 0 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.435 --rc genhtml_branch_coverage=1 00:06:06.435 --rc genhtml_function_coverage=1 00:06:06.435 --rc genhtml_legend=1 00:06:06.435 --rc geninfo_all_blocks=1 00:06:06.435 --rc geninfo_unexecuted_blocks=1 00:06:06.435 00:06:06.435 ' 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.435 --rc genhtml_branch_coverage=1 00:06:06.435 --rc genhtml_function_coverage=1 00:06:06.435 --rc genhtml_legend=1 00:06:06.435 --rc geninfo_all_blocks=1 00:06:06.435 --rc geninfo_unexecuted_blocks=1 00:06:06.435 00:06:06.435 ' 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.435 --rc genhtml_branch_coverage=1 00:06:06.435 --rc genhtml_function_coverage=1 00:06:06.435 --rc genhtml_legend=1 00:06:06.435 --rc geninfo_all_blocks=1 00:06:06.435 --rc geninfo_unexecuted_blocks=1 00:06:06.435 00:06:06.435 ' 00:06:06.435 13:19:02 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.435 --rc genhtml_branch_coverage=1 00:06:06.435 --rc genhtml_function_coverage=1 00:06:06.435 --rc genhtml_legend=1 00:06:06.435 --rc geninfo_all_blocks=1 00:06:06.435 --rc geninfo_unexecuted_blocks=1 00:06:06.435 00:06:06.435 ' 00:06:06.435 13:19:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.435 13:19:02 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.435 13:19:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ddb0366-2ef0-47b4-a531-d667894373d3 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7ddb0366-2ef0-47b4-a531-d667894373d3 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.436 13:19:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.436 13:19:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.436 13:19:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.436 13:19:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.436 13:19:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.436 13:19:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.436 13:19:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.436 13:19:02 json_config -- paths/export.sh@5 -- # export PATH 00:06:06.436 13:19:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@51 -- # : 0 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.436 13:19:02 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.436 13:19:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.436 13:19:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.436 13:19:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:06.436 13:19:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:06.436 13:19:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:06.436 13:19:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:06.436 WARNING: No tests are enabled so not running JSON configuration tests 00:06:06.436 13:19:02 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:06.436 13:19:02 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:06.436 00:06:06.436 real 0m0.141s 00:06:06.436 user 0m0.083s 00:06:06.436 sys 0m0.057s 00:06:06.436 13:19:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.436 13:19:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.436 ************************************ 00:06:06.436 END TEST json_config 00:06:06.436 ************************************ 00:06:06.694 13:19:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:06.695 13:19:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.695 13:19:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.695 13:19:02 -- common/autotest_common.sh@10 -- # set +x 00:06:06.695 ************************************ 00:06:06.695 START TEST json_config_extra_key 00:06:06.695 ************************************ 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.695 13:19:02 
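json_config.sh bails out at this point because none of the relevant feature flags are enabled on this VM: it sums the SPDK_TEST_* variables and, when the total is zero, prints the warning seen above and exits successfully. The check, roughly as traced (all flag values are zero in this run):

# json_config/json_config.sh@26-28 as traced above
if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF \
      + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
    echo 'WARNING: No tests are enabled so not running JSON configuration tests'
    exit 0
fi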
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.695 --rc genhtml_branch_coverage=1 00:06:06.695 --rc genhtml_function_coverage=1 00:06:06.695 --rc genhtml_legend=1 00:06:06.695 --rc geninfo_all_blocks=1 00:06:06.695 --rc geninfo_unexecuted_blocks=1 00:06:06.695 00:06:06.695 ' 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.695 --rc genhtml_branch_coverage=1 00:06:06.695 --rc genhtml_function_coverage=1 00:06:06.695 --rc genhtml_legend=1 00:06:06.695 --rc geninfo_all_blocks=1 00:06:06.695 --rc geninfo_unexecuted_blocks=1 00:06:06.695 00:06:06.695 ' 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.695 --rc genhtml_branch_coverage=1 00:06:06.695 --rc genhtml_function_coverage=1 00:06:06.695 --rc genhtml_legend=1 00:06:06.695 --rc geninfo_all_blocks=1 00:06:06.695 --rc geninfo_unexecuted_blocks=1 00:06:06.695 00:06:06.695 ' 00:06:06.695 13:19:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.695 --rc genhtml_branch_coverage=1 00:06:06.695 --rc 
genhtml_function_coverage=1 00:06:06.695 --rc genhtml_legend=1 00:06:06.695 --rc geninfo_all_blocks=1 00:06:06.695 --rc geninfo_unexecuted_blocks=1 00:06:06.695 00:06:06.695 ' 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ddb0366-2ef0-47b4-a531-d667894373d3 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7ddb0366-2ef0-47b4-a531-d667894373d3 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.695 13:19:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.695 13:19:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.695 13:19:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.695 13:19:02 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.695 13:19:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:06.695 13:19:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.695 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.695 13:19:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:06.695 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.695 INFO: launching applications... 00:06:06.696 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
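The "[: : integer expression expected" complaint that shows up in both sourcings of nvmf/common.sh above is bash reporting that line 33 compared an empty string with -eq ('[' '' -eq 1 ']'): the variable being tested is simply unset on this VM, so the branch is skipped and the test carries on. A defensive form would default the value before the numeric test; this is a hypothetical sketch, not the upstream code:

# SOME_FLAG is a stand-in for whichever variable is empty at nvmf/common.sh:33 in this run.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"   # placeholder for the guarded branch
fi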
00:06:06.696 13:19:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70120 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.696 Waiting for target to run... 00:06:06.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70120 /var/tmp/spdk_tgt.sock 00:06:06.696 13:19:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70120 ']' 00:06:06.696 13:19:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.696 13:19:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.696 13:19:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.696 13:19:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.696 13:19:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:06.696 13:19:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.696 [2024-11-18 13:19:02.810553] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:06.696 [2024-11-18 13:19:02.810813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70120 ] 00:06:07.262 [2024-11-18 13:19:03.133428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.262 [2024-11-18 13:19:03.144111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.521 00:06:07.521 INFO: shutting down applications... 00:06:07.521 13:19:03 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.521 13:19:03 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:07.521 13:19:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:07.521 13:19:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
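json_config_test_start_app, expanded in the trace above, amounts to launching a dedicated spdk_tgt with the extra-key JSON config on its own RPC socket and blocking until that socket accepts connections. Condensed from the commands of this run (pid and paths as logged):

# Condensed from the launch traced above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid[target]=$!                                   # 70120 in this run
waitforlisten "${app_pid[target]}" /var/tmp/spdk_tgt.sock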
00:06:07.521 13:19:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:07.521 13:19:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:07.521 13:19:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.521 13:19:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70120 ]] 00:06:07.521 13:19:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70120 00:06:07.521 13:19:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.521 13:19:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.521 13:19:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70120 00:06:07.521 13:19:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.088 SPDK target shutdown done 00:06:08.088 Success 00:06:08.088 13:19:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.088 13:19:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.088 13:19:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70120 00:06:08.088 13:19:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.088 13:19:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:08.088 13:19:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.088 13:19:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.088 13:19:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:08.088 ************************************ 00:06:08.088 END TEST json_config_extra_key 00:06:08.088 ************************************ 00:06:08.088 00:06:08.088 real 0m1.541s 00:06:08.088 user 0m1.205s 00:06:08.088 sys 0m0.339s 00:06:08.088 13:19:04 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.088 13:19:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:08.088 13:19:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:08.088 13:19:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.088 13:19:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.088 13:19:04 -- common/autotest_common.sh@10 -- # set +x 00:06:08.088 ************************************ 00:06:08.088 START TEST alias_rpc 00:06:08.088 ************************************ 00:06:08.088 13:19:04 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:08.347 * Looking for test storage... 
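The matching shutdown path, traced just before the alias_rpc section starts, sends SIGINT to that pid and then polls it with kill -0 for up to 30 half-second intervals before reporting success. A compact sketch of json_config_test_shutdown_app's loop (behaviour as observed, not verbatim source):

# Sketch of the shutdown loop traced above.
kill -SIGINT "${app_pid[target]}"
for (( i = 0; i < 30; i++ )); do
    kill -0 "${app_pid[target]}" 2>/dev/null || break   # target gone: stop polling
    sleep 0.5
done
echo 'SPDK target shutdown done'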
00:06:08.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.347 13:19:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.347 --rc genhtml_branch_coverage=1 00:06:08.347 --rc genhtml_function_coverage=1 00:06:08.347 --rc genhtml_legend=1 00:06:08.347 --rc geninfo_all_blocks=1 00:06:08.347 --rc geninfo_unexecuted_blocks=1 00:06:08.347 00:06:08.347 ' 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.347 --rc genhtml_branch_coverage=1 00:06:08.347 --rc genhtml_function_coverage=1 00:06:08.347 --rc genhtml_legend=1 00:06:08.347 --rc geninfo_all_blocks=1 00:06:08.347 --rc geninfo_unexecuted_blocks=1 00:06:08.347 00:06:08.347 ' 00:06:08.347 13:19:04 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.347 --rc genhtml_branch_coverage=1 00:06:08.347 --rc genhtml_function_coverage=1 00:06:08.347 --rc genhtml_legend=1 00:06:08.347 --rc geninfo_all_blocks=1 00:06:08.347 --rc geninfo_unexecuted_blocks=1 00:06:08.347 00:06:08.347 ' 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.347 --rc genhtml_branch_coverage=1 00:06:08.347 --rc genhtml_function_coverage=1 00:06:08.347 --rc genhtml_legend=1 00:06:08.347 --rc geninfo_all_blocks=1 00:06:08.347 --rc geninfo_unexecuted_blocks=1 00:06:08.347 00:06:08.347 ' 00:06:08.347 13:19:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.347 13:19:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70198 00:06:08.347 13:19:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.347 13:19:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70198 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70198 ']' 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.347 13:19:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.347 [2024-11-18 13:19:04.386395] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:08.347 [2024-11-18 13:19:04.386654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70198 ] 00:06:08.605 [2024-11-18 13:19:04.539397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.605 [2024-11-18 13:19:04.557200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.176 13:19:05 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.176 13:19:05 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.176 13:19:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:09.433 13:19:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70198 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70198 ']' 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70198 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70198 00:06:09.433 killing process with pid 70198 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70198' 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@973 -- # kill 70198 00:06:09.433 13:19:05 alias_rpc -- common/autotest_common.sh@978 -- # wait 70198 00:06:09.691 ************************************ 00:06:09.692 END TEST alias_rpc 00:06:09.692 ************************************ 00:06:09.692 00:06:09.692 real 0m1.520s 00:06:09.692 user 0m1.660s 00:06:09.692 sys 0m0.350s 00:06:09.692 13:19:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.692 13:19:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.692 13:19:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:09.692 13:19:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:09.692 13:19:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.692 13:19:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.692 13:19:05 -- common/autotest_common.sh@10 -- # set +x 00:06:09.692 ************************************ 00:06:09.692 START TEST spdkcli_tcp 00:06:09.692 ************************************ 00:06:09.692 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:09.692 * Looking for test storage... 
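The alias_rpc run just above starts its own spdk_tgt, feeds it rpc.py load_config -i, and then tears the target down with the killprocess helper, which resolves the process name before signalling it. The teardown, condensed from the trace (pid 70198 in this run):

# killprocess, condensed from the teardown traced above.
pid=70198
if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
fi
if [ "$process_name" != sudo ]; then
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
fi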
00:06:09.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:09.692 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.692 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.692 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.951 13:19:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.951 --rc genhtml_branch_coverage=1 00:06:09.951 --rc genhtml_function_coverage=1 00:06:09.951 --rc genhtml_legend=1 00:06:09.951 --rc geninfo_all_blocks=1 00:06:09.951 --rc geninfo_unexecuted_blocks=1 00:06:09.951 00:06:09.951 ' 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.951 --rc genhtml_branch_coverage=1 00:06:09.951 --rc genhtml_function_coverage=1 00:06:09.951 --rc genhtml_legend=1 00:06:09.951 --rc geninfo_all_blocks=1 00:06:09.951 --rc geninfo_unexecuted_blocks=1 00:06:09.951 
00:06:09.951 ' 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.951 --rc genhtml_branch_coverage=1 00:06:09.951 --rc genhtml_function_coverage=1 00:06:09.951 --rc genhtml_legend=1 00:06:09.951 --rc geninfo_all_blocks=1 00:06:09.951 --rc geninfo_unexecuted_blocks=1 00:06:09.951 00:06:09.951 ' 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.951 --rc genhtml_branch_coverage=1 00:06:09.951 --rc genhtml_function_coverage=1 00:06:09.951 --rc genhtml_legend=1 00:06:09.951 --rc geninfo_all_blocks=1 00:06:09.951 --rc geninfo_unexecuted_blocks=1 00:06:09.951 00:06:09.951 ' 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70273 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70273 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70273 ']' 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.951 13:19:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.951 13:19:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.951 [2024-11-18 13:19:05.953939] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:09.951 [2024-11-18 13:19:05.954058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70273 ] 00:06:10.209 [2024-11-18 13:19:06.111083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.209 [2024-11-18 13:19:06.131637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.209 [2024-11-18 13:19:06.131648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.776 13:19:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.776 13:19:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:10.776 13:19:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70290 00:06:10.776 13:19:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:10.776 13:19:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:11.062 [ 00:06:11.062 "bdev_malloc_delete", 00:06:11.062 "bdev_malloc_create", 00:06:11.062 "bdev_null_resize", 00:06:11.062 "bdev_null_delete", 00:06:11.062 "bdev_null_create", 00:06:11.062 "bdev_nvme_cuse_unregister", 00:06:11.062 "bdev_nvme_cuse_register", 00:06:11.062 "bdev_opal_new_user", 00:06:11.062 "bdev_opal_set_lock_state", 00:06:11.062 "bdev_opal_delete", 00:06:11.062 "bdev_opal_get_info", 00:06:11.062 "bdev_opal_create", 00:06:11.062 "bdev_nvme_opal_revert", 00:06:11.062 "bdev_nvme_opal_init", 00:06:11.062 "bdev_nvme_send_cmd", 00:06:11.062 "bdev_nvme_set_keys", 00:06:11.062 "bdev_nvme_get_path_iostat", 00:06:11.062 "bdev_nvme_get_mdns_discovery_info", 00:06:11.062 "bdev_nvme_stop_mdns_discovery", 00:06:11.062 "bdev_nvme_start_mdns_discovery", 00:06:11.062 "bdev_nvme_set_multipath_policy", 00:06:11.062 "bdev_nvme_set_preferred_path", 00:06:11.062 "bdev_nvme_get_io_paths", 00:06:11.062 "bdev_nvme_remove_error_injection", 00:06:11.062 "bdev_nvme_add_error_injection", 00:06:11.062 "bdev_nvme_get_discovery_info", 00:06:11.062 "bdev_nvme_stop_discovery", 00:06:11.062 "bdev_nvme_start_discovery", 00:06:11.062 "bdev_nvme_get_controller_health_info", 00:06:11.062 "bdev_nvme_disable_controller", 00:06:11.062 "bdev_nvme_enable_controller", 00:06:11.062 "bdev_nvme_reset_controller", 00:06:11.062 "bdev_nvme_get_transport_statistics", 00:06:11.062 "bdev_nvme_apply_firmware", 00:06:11.062 "bdev_nvme_detach_controller", 00:06:11.063 "bdev_nvme_get_controllers", 00:06:11.063 "bdev_nvme_attach_controller", 00:06:11.063 "bdev_nvme_set_hotplug", 00:06:11.063 "bdev_nvme_set_options", 00:06:11.063 "bdev_passthru_delete", 00:06:11.063 "bdev_passthru_create", 00:06:11.063 "bdev_lvol_set_parent_bdev", 00:06:11.063 "bdev_lvol_set_parent", 00:06:11.063 "bdev_lvol_check_shallow_copy", 00:06:11.063 "bdev_lvol_start_shallow_copy", 00:06:11.063 "bdev_lvol_grow_lvstore", 00:06:11.063 "bdev_lvol_get_lvols", 00:06:11.063 "bdev_lvol_get_lvstores", 00:06:11.063 "bdev_lvol_delete", 00:06:11.063 "bdev_lvol_set_read_only", 00:06:11.063 "bdev_lvol_resize", 00:06:11.063 "bdev_lvol_decouple_parent", 00:06:11.063 "bdev_lvol_inflate", 00:06:11.063 "bdev_lvol_rename", 00:06:11.063 "bdev_lvol_clone_bdev", 00:06:11.063 "bdev_lvol_clone", 00:06:11.063 "bdev_lvol_snapshot", 00:06:11.063 "bdev_lvol_create", 00:06:11.063 "bdev_lvol_delete_lvstore", 00:06:11.063 "bdev_lvol_rename_lvstore", 00:06:11.063 
"bdev_lvol_create_lvstore", 00:06:11.063 "bdev_raid_set_options", 00:06:11.063 "bdev_raid_remove_base_bdev", 00:06:11.063 "bdev_raid_add_base_bdev", 00:06:11.063 "bdev_raid_delete", 00:06:11.063 "bdev_raid_create", 00:06:11.063 "bdev_raid_get_bdevs", 00:06:11.063 "bdev_error_inject_error", 00:06:11.063 "bdev_error_delete", 00:06:11.063 "bdev_error_create", 00:06:11.063 "bdev_split_delete", 00:06:11.063 "bdev_split_create", 00:06:11.063 "bdev_delay_delete", 00:06:11.063 "bdev_delay_create", 00:06:11.063 "bdev_delay_update_latency", 00:06:11.063 "bdev_zone_block_delete", 00:06:11.063 "bdev_zone_block_create", 00:06:11.063 "blobfs_create", 00:06:11.063 "blobfs_detect", 00:06:11.063 "blobfs_set_cache_size", 00:06:11.063 "bdev_xnvme_delete", 00:06:11.063 "bdev_xnvme_create", 00:06:11.063 "bdev_aio_delete", 00:06:11.063 "bdev_aio_rescan", 00:06:11.063 "bdev_aio_create", 00:06:11.063 "bdev_ftl_set_property", 00:06:11.063 "bdev_ftl_get_properties", 00:06:11.063 "bdev_ftl_get_stats", 00:06:11.063 "bdev_ftl_unmap", 00:06:11.063 "bdev_ftl_unload", 00:06:11.063 "bdev_ftl_delete", 00:06:11.063 "bdev_ftl_load", 00:06:11.063 "bdev_ftl_create", 00:06:11.063 "bdev_virtio_attach_controller", 00:06:11.063 "bdev_virtio_scsi_get_devices", 00:06:11.063 "bdev_virtio_detach_controller", 00:06:11.063 "bdev_virtio_blk_set_hotplug", 00:06:11.063 "bdev_iscsi_delete", 00:06:11.063 "bdev_iscsi_create", 00:06:11.063 "bdev_iscsi_set_options", 00:06:11.063 "accel_error_inject_error", 00:06:11.063 "ioat_scan_accel_module", 00:06:11.063 "dsa_scan_accel_module", 00:06:11.063 "iaa_scan_accel_module", 00:06:11.063 "keyring_file_remove_key", 00:06:11.063 "keyring_file_add_key", 00:06:11.063 "keyring_linux_set_options", 00:06:11.063 "fsdev_aio_delete", 00:06:11.063 "fsdev_aio_create", 00:06:11.063 "iscsi_get_histogram", 00:06:11.063 "iscsi_enable_histogram", 00:06:11.063 "iscsi_set_options", 00:06:11.063 "iscsi_get_auth_groups", 00:06:11.063 "iscsi_auth_group_remove_secret", 00:06:11.063 "iscsi_auth_group_add_secret", 00:06:11.063 "iscsi_delete_auth_group", 00:06:11.063 "iscsi_create_auth_group", 00:06:11.063 "iscsi_set_discovery_auth", 00:06:11.063 "iscsi_get_options", 00:06:11.063 "iscsi_target_node_request_logout", 00:06:11.063 "iscsi_target_node_set_redirect", 00:06:11.063 "iscsi_target_node_set_auth", 00:06:11.063 "iscsi_target_node_add_lun", 00:06:11.063 "iscsi_get_stats", 00:06:11.063 "iscsi_get_connections", 00:06:11.063 "iscsi_portal_group_set_auth", 00:06:11.063 "iscsi_start_portal_group", 00:06:11.063 "iscsi_delete_portal_group", 00:06:11.063 "iscsi_create_portal_group", 00:06:11.063 "iscsi_get_portal_groups", 00:06:11.063 "iscsi_delete_target_node", 00:06:11.063 "iscsi_target_node_remove_pg_ig_maps", 00:06:11.063 "iscsi_target_node_add_pg_ig_maps", 00:06:11.063 "iscsi_create_target_node", 00:06:11.063 "iscsi_get_target_nodes", 00:06:11.063 "iscsi_delete_initiator_group", 00:06:11.063 "iscsi_initiator_group_remove_initiators", 00:06:11.063 "iscsi_initiator_group_add_initiators", 00:06:11.063 "iscsi_create_initiator_group", 00:06:11.063 "iscsi_get_initiator_groups", 00:06:11.063 "nvmf_set_crdt", 00:06:11.063 "nvmf_set_config", 00:06:11.063 "nvmf_set_max_subsystems", 00:06:11.063 "nvmf_stop_mdns_prr", 00:06:11.063 "nvmf_publish_mdns_prr", 00:06:11.063 "nvmf_subsystem_get_listeners", 00:06:11.063 "nvmf_subsystem_get_qpairs", 00:06:11.063 "nvmf_subsystem_get_controllers", 00:06:11.063 "nvmf_get_stats", 00:06:11.063 "nvmf_get_transports", 00:06:11.063 "nvmf_create_transport", 00:06:11.063 "nvmf_get_targets", 00:06:11.063 
"nvmf_delete_target", 00:06:11.063 "nvmf_create_target", 00:06:11.063 "nvmf_subsystem_allow_any_host", 00:06:11.063 "nvmf_subsystem_set_keys", 00:06:11.063 "nvmf_subsystem_remove_host", 00:06:11.063 "nvmf_subsystem_add_host", 00:06:11.063 "nvmf_ns_remove_host", 00:06:11.063 "nvmf_ns_add_host", 00:06:11.063 "nvmf_subsystem_remove_ns", 00:06:11.063 "nvmf_subsystem_set_ns_ana_group", 00:06:11.063 "nvmf_subsystem_add_ns", 00:06:11.063 "nvmf_subsystem_listener_set_ana_state", 00:06:11.063 "nvmf_discovery_get_referrals", 00:06:11.063 "nvmf_discovery_remove_referral", 00:06:11.063 "nvmf_discovery_add_referral", 00:06:11.063 "nvmf_subsystem_remove_listener", 00:06:11.063 "nvmf_subsystem_add_listener", 00:06:11.063 "nvmf_delete_subsystem", 00:06:11.063 "nvmf_create_subsystem", 00:06:11.063 "nvmf_get_subsystems", 00:06:11.063 "env_dpdk_get_mem_stats", 00:06:11.063 "nbd_get_disks", 00:06:11.063 "nbd_stop_disk", 00:06:11.063 "nbd_start_disk", 00:06:11.063 "ublk_recover_disk", 00:06:11.063 "ublk_get_disks", 00:06:11.063 "ublk_stop_disk", 00:06:11.063 "ublk_start_disk", 00:06:11.063 "ublk_destroy_target", 00:06:11.063 "ublk_create_target", 00:06:11.063 "virtio_blk_create_transport", 00:06:11.063 "virtio_blk_get_transports", 00:06:11.063 "vhost_controller_set_coalescing", 00:06:11.063 "vhost_get_controllers", 00:06:11.063 "vhost_delete_controller", 00:06:11.063 "vhost_create_blk_controller", 00:06:11.063 "vhost_scsi_controller_remove_target", 00:06:11.063 "vhost_scsi_controller_add_target", 00:06:11.063 "vhost_start_scsi_controller", 00:06:11.063 "vhost_create_scsi_controller", 00:06:11.063 "thread_set_cpumask", 00:06:11.063 "scheduler_set_options", 00:06:11.063 "framework_get_governor", 00:06:11.063 "framework_get_scheduler", 00:06:11.063 "framework_set_scheduler", 00:06:11.063 "framework_get_reactors", 00:06:11.063 "thread_get_io_channels", 00:06:11.063 "thread_get_pollers", 00:06:11.063 "thread_get_stats", 00:06:11.063 "framework_monitor_context_switch", 00:06:11.063 "spdk_kill_instance", 00:06:11.063 "log_enable_timestamps", 00:06:11.063 "log_get_flags", 00:06:11.063 "log_clear_flag", 00:06:11.063 "log_set_flag", 00:06:11.063 "log_get_level", 00:06:11.063 "log_set_level", 00:06:11.063 "log_get_print_level", 00:06:11.063 "log_set_print_level", 00:06:11.063 "framework_enable_cpumask_locks", 00:06:11.063 "framework_disable_cpumask_locks", 00:06:11.063 "framework_wait_init", 00:06:11.063 "framework_start_init", 00:06:11.063 "scsi_get_devices", 00:06:11.063 "bdev_get_histogram", 00:06:11.063 "bdev_enable_histogram", 00:06:11.063 "bdev_set_qos_limit", 00:06:11.063 "bdev_set_qd_sampling_period", 00:06:11.063 "bdev_get_bdevs", 00:06:11.063 "bdev_reset_iostat", 00:06:11.063 "bdev_get_iostat", 00:06:11.063 "bdev_examine", 00:06:11.063 "bdev_wait_for_examine", 00:06:11.063 "bdev_set_options", 00:06:11.063 "accel_get_stats", 00:06:11.063 "accel_set_options", 00:06:11.063 "accel_set_driver", 00:06:11.063 "accel_crypto_key_destroy", 00:06:11.063 "accel_crypto_keys_get", 00:06:11.063 "accel_crypto_key_create", 00:06:11.063 "accel_assign_opc", 00:06:11.063 "accel_get_module_info", 00:06:11.063 "accel_get_opc_assignments", 00:06:11.063 "vmd_rescan", 00:06:11.063 "vmd_remove_device", 00:06:11.063 "vmd_enable", 00:06:11.063 "sock_get_default_impl", 00:06:11.063 "sock_set_default_impl", 00:06:11.063 "sock_impl_set_options", 00:06:11.063 "sock_impl_get_options", 00:06:11.063 "iobuf_get_stats", 00:06:11.063 "iobuf_set_options", 00:06:11.063 "keyring_get_keys", 00:06:11.063 "framework_get_pci_devices", 00:06:11.063 
"framework_get_config", 00:06:11.063 "framework_get_subsystems", 00:06:11.063 "fsdev_set_opts", 00:06:11.063 "fsdev_get_opts", 00:06:11.063 "trace_get_info", 00:06:11.063 "trace_get_tpoint_group_mask", 00:06:11.063 "trace_disable_tpoint_group", 00:06:11.063 "trace_enable_tpoint_group", 00:06:11.063 "trace_clear_tpoint_mask", 00:06:11.063 "trace_set_tpoint_mask", 00:06:11.063 "notify_get_notifications", 00:06:11.063 "notify_get_types", 00:06:11.063 "spdk_get_version", 00:06:11.063 "rpc_get_methods" 00:06:11.063 ] 00:06:11.063 13:19:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:11.063 13:19:07 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:11.063 13:19:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.063 13:19:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:11.063 13:19:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70273 00:06:11.063 13:19:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70273 ']' 00:06:11.063 13:19:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70273 00:06:11.063 13:19:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:11.064 13:19:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.064 13:19:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70273 00:06:11.064 killing process with pid 70273 00:06:11.064 13:19:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.064 13:19:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.064 13:19:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70273' 00:06:11.064 13:19:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70273 00:06:11.064 13:19:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70273 00:06:11.323 00:06:11.323 real 0m1.575s 00:06:11.323 user 0m2.810s 00:06:11.323 sys 0m0.411s 00:06:11.323 13:19:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.323 13:19:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.323 ************************************ 00:06:11.323 END TEST spdkcli_tcp 00:06:11.323 ************************************ 00:06:11.324 13:19:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:11.324 13:19:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.324 13:19:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.324 13:19:07 -- common/autotest_common.sh@10 -- # set +x 00:06:11.324 ************************************ 00:06:11.324 START TEST dpdk_mem_utility 00:06:11.324 ************************************ 00:06:11.324 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:11.324 * Looking for test storage... 
00:06:11.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:11.324 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.324 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.324 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.583 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.583 13:19:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:11.583 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.583 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.583 --rc genhtml_branch_coverage=1 00:06:11.583 --rc genhtml_function_coverage=1 00:06:11.583 --rc genhtml_legend=1 00:06:11.583 --rc geninfo_all_blocks=1 00:06:11.583 --rc geninfo_unexecuted_blocks=1 00:06:11.583 00:06:11.583 ' 00:06:11.583 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.583 --rc 
genhtml_branch_coverage=1 00:06:11.583 --rc genhtml_function_coverage=1 00:06:11.583 --rc genhtml_legend=1 00:06:11.583 --rc geninfo_all_blocks=1 00:06:11.583 --rc geninfo_unexecuted_blocks=1 00:06:11.583 00:06:11.583 ' 00:06:11.583 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.583 --rc genhtml_branch_coverage=1 00:06:11.583 --rc genhtml_function_coverage=1 00:06:11.583 --rc genhtml_legend=1 00:06:11.583 --rc geninfo_all_blocks=1 00:06:11.583 --rc geninfo_unexecuted_blocks=1 00:06:11.583 00:06:11.583 ' 00:06:11.584 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.584 --rc genhtml_branch_coverage=1 00:06:11.584 --rc genhtml_function_coverage=1 00:06:11.584 --rc genhtml_legend=1 00:06:11.584 --rc geninfo_all_blocks=1 00:06:11.584 --rc geninfo_unexecuted_blocks=1 00:06:11.584 00:06:11.584 ' 00:06:11.584 13:19:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:11.584 13:19:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70373 00:06:11.584 13:19:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70373 00:06:11.584 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70373 ']' 00:06:11.584 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.584 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.584 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.584 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.584 13:19:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.584 13:19:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.584 [2024-11-18 13:19:07.583072] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:11.584 [2024-11-18 13:19:07.583220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70373 ] 00:06:11.842 [2024-11-18 13:19:07.741550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.842 [2024-11-18 13:19:07.761196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.410 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.410 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:12.410 13:19:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:12.411 13:19:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:12.411 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.411 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.411 { 00:06:12.411 "filename": "/tmp/spdk_mem_dump.txt" 00:06:12.411 } 00:06:12.411 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.411 13:19:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:12.411 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:12.411 1 heaps totaling size 810.000000 MiB 00:06:12.411 size: 810.000000 MiB heap id: 0 00:06:12.411 end heaps---------- 00:06:12.411 9 mempools totaling size 595.772034 MiB 00:06:12.411 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:12.411 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:12.411 size: 92.545471 MiB name: bdev_io_70373 00:06:12.411 size: 50.003479 MiB name: msgpool_70373 00:06:12.411 size: 36.509338 MiB name: fsdev_io_70373 00:06:12.411 size: 21.763794 MiB name: PDU_Pool 00:06:12.411 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:12.411 size: 4.133484 MiB name: evtpool_70373 00:06:12.411 size: 0.026123 MiB name: Session_Pool 00:06:12.411 end mempools------- 00:06:12.411 6 memzones totaling size 4.142822 MiB 00:06:12.411 size: 1.000366 MiB name: RG_ring_0_70373 00:06:12.411 size: 1.000366 MiB name: RG_ring_1_70373 00:06:12.411 size: 1.000366 MiB name: RG_ring_4_70373 00:06:12.411 size: 1.000366 MiB name: RG_ring_5_70373 00:06:12.411 size: 0.125366 MiB name: RG_ring_2_70373 00:06:12.411 size: 0.015991 MiB name: RG_ring_3_70373 00:06:12.411 end memzones------- 00:06:12.411 13:19:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:12.411 heap id: 0 total size: 810.000000 MiB number of busy elements: 308 number of free elements: 15 00:06:12.411 list of free elements. 
size: 10.814148 MiB 00:06:12.411 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:12.411 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:12.411 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:12.411 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:12.411 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:12.411 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:12.411 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:12.411 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:12.411 element at address: 0x20001a600000 with size: 0.568604 MiB 00:06:12.411 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:12.411 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:12.411 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:12.411 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:12.411 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:12.411 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:12.411 list of standard malloc elements. size: 199.266968 MiB 00:06:12.411 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:12.411 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:12.411 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:12.411 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:12.411 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:12.411 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:12.411 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:12.411 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:12.411 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:12.411 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:12.411 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:12.411 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:12.411 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:12.411 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:12.412 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693340 with size: 0.000183 MiB 
00:06:12.412 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:12.412 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:12.412 element at 
address: 0x200027a6c540 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:12.412 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6ea00 
with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:12.413 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:12.413 list of memzone associated elements. 
size: 599.918884 MiB 00:06:12.413 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:12.413 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:12.413 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:12.413 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:12.413 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:12.413 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70373_0 00:06:12.413 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:12.413 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70373_0 00:06:12.413 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:12.413 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70373_0 00:06:12.413 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:12.413 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:12.413 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:12.413 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:12.413 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:12.413 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70373_0 00:06:12.413 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:12.413 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70373 00:06:12.413 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:12.413 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70373 00:06:12.413 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:12.413 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:12.413 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:12.413 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:12.413 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:12.413 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:12.413 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:12.413 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:12.413 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:12.413 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70373 00:06:12.413 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:12.413 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70373 00:06:12.413 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:12.413 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70373 00:06:12.413 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:12.413 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70373 00:06:12.413 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:12.413 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70373 00:06:12.413 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:12.413 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70373 00:06:12.413 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:12.413 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:12.413 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:12.413 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:12.413 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:12.413 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:12.413 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:12.413 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70373 00:06:12.413 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:12.413 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70373 00:06:12.413 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:12.413 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:12.413 element at address: 0x200027a65680 with size: 0.023743 MiB 00:06:12.413 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:12.413 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:12.413 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70373 00:06:12.413 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:06:12.413 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:12.413 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:12.413 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70373 00:06:12.413 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:12.413 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70373 00:06:12.413 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:12.413 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70373 00:06:12.413 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:06:12.413 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:12.413 13:19:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:12.413 13:19:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70373 00:06:12.413 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70373 ']' 00:06:12.413 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70373 00:06:12.413 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:12.413 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.413 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70373 00:06:12.672 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.672 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.672 killing process with pid 70373 00:06:12.672 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70373' 00:06:12.672 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70373 00:06:12.672 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70373 00:06:12.930 00:06:12.930 real 0m1.451s 00:06:12.930 user 0m1.510s 00:06:12.930 sys 0m0.350s 00:06:12.930 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.930 13:19:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.930 ************************************ 00:06:12.930 END TEST dpdk_mem_utility 00:06:12.930 ************************************ 00:06:12.930 13:19:08 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:12.930 13:19:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.930 13:19:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.930 13:19:08 -- common/autotest_common.sh@10 -- # set +x 
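Taken together, the dpdk_mem_utility run above reduces to a short sequence: start spdk_tgt, wait for its RPC socket, ask the target to dump memory statistics (env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt), then post-process the dump with dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element view of heap 0; the _70373 suffixes on the pool and memzone names are simply the target's PID. A sketch of that sequence, with the traps and retry logic of the real test_dpdk_mem_info.sh omitted and a simple polling loop standing in for waitforlisten:

    SPDK=/home/vagrant/spdk_repo/spdk                 # assumes hugepages are already configured
    "$SPDK/build/bin/spdk_tgt" &
    spdkpid=$!
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2                                     # crude waitforlisten on /var/tmp/spdk.sock
    done
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
    "$SPDK/scripts/dpdk_mem_info.py"                  # heap/mempool/memzone summary
    "$SPDK/scripts/dpdk_mem_info.py" -m 0             # per-element dump for heap id 0
    kill "$spdkpid" && wait "$spdkpid"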
00:06:12.930 ************************************ 00:06:12.930 START TEST event 00:06:12.930 ************************************ 00:06:12.930 13:19:08 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:12.930 * Looking for test storage... 00:06:12.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:12.930 13:19:08 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.930 13:19:08 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.930 13:19:08 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.930 13:19:08 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.930 13:19:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.930 13:19:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.930 13:19:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.930 13:19:08 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.930 13:19:08 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.930 13:19:08 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.930 13:19:08 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.930 13:19:08 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.930 13:19:08 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.930 13:19:08 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.930 13:19:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.930 13:19:08 event -- scripts/common.sh@344 -- # case "$op" in 00:06:12.930 13:19:08 event -- scripts/common.sh@345 -- # : 1 00:06:12.930 13:19:08 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.930 13:19:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.930 13:19:08 event -- scripts/common.sh@365 -- # decimal 1 00:06:12.930 13:19:08 event -- scripts/common.sh@353 -- # local d=1 00:06:12.930 13:19:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.930 13:19:08 event -- scripts/common.sh@355 -- # echo 1 00:06:12.930 13:19:08 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.930 13:19:08 event -- scripts/common.sh@366 -- # decimal 2 00:06:12.930 13:19:08 event -- scripts/common.sh@353 -- # local d=2 00:06:12.930 13:19:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.930 13:19:08 event -- scripts/common.sh@355 -- # echo 2 00:06:12.930 13:19:08 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.930 13:19:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.930 13:19:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.930 13:19:08 event -- scripts/common.sh@368 -- # return 0 00:06:12.930 13:19:08 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.930 13:19:08 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.930 --rc genhtml_branch_coverage=1 00:06:12.930 --rc genhtml_function_coverage=1 00:06:12.930 --rc genhtml_legend=1 00:06:12.930 --rc geninfo_all_blocks=1 00:06:12.930 --rc geninfo_unexecuted_blocks=1 00:06:12.930 00:06:12.930 ' 00:06:12.930 13:19:08 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.930 --rc genhtml_branch_coverage=1 00:06:12.930 --rc genhtml_function_coverage=1 00:06:12.930 --rc genhtml_legend=1 00:06:12.930 --rc 
geninfo_all_blocks=1 00:06:12.930 --rc geninfo_unexecuted_blocks=1 00:06:12.930 00:06:12.930 ' 00:06:12.930 13:19:08 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.931 --rc genhtml_branch_coverage=1 00:06:12.931 --rc genhtml_function_coverage=1 00:06:12.931 --rc genhtml_legend=1 00:06:12.931 --rc geninfo_all_blocks=1 00:06:12.931 --rc geninfo_unexecuted_blocks=1 00:06:12.931 00:06:12.931 ' 00:06:12.931 13:19:08 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.931 --rc genhtml_branch_coverage=1 00:06:12.931 --rc genhtml_function_coverage=1 00:06:12.931 --rc genhtml_legend=1 00:06:12.931 --rc geninfo_all_blocks=1 00:06:12.931 --rc geninfo_unexecuted_blocks=1 00:06:12.931 00:06:12.931 ' 00:06:12.931 13:19:08 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:12.931 13:19:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:12.931 13:19:08 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.931 13:19:08 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:12.931 13:19:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.931 13:19:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.931 ************************************ 00:06:12.931 START TEST event_perf 00:06:12.931 ************************************ 00:06:12.931 13:19:08 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.931 Running I/O for 1 seconds...[2024-11-18 13:19:09.007933] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:12.931 [2024-11-18 13:19:09.008051] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70448 ] 00:06:13.197 [2024-11-18 13:19:09.156955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.197 [2024-11-18 13:19:09.182131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.197 [2024-11-18 13:19:09.182270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.197 [2024-11-18 13:19:09.182288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.197 Running I/O for 1 seconds...[2024-11-18 13:19:09.182373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.132 00:06:14.132 lcore 0: 202155 00:06:14.132 lcore 1: 202158 00:06:14.132 lcore 2: 202156 00:06:14.132 lcore 3: 202153 00:06:14.132 done. 
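With the 0xF core mask, event_perf runs one reactor per core, and each of the four lcores drained roughly 202 thousand events during the one-second window, about 808.6 thousand in total (202155 + 202158 + 202156 + 202153 = 808622). The mask and duration are plain command-line options, so a hypothetical two-core, five-second run of the same binary would be:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5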
00:06:14.132 00:06:14.132 real 0m1.245s 00:06:14.132 user 0m4.044s 00:06:14.132 sys 0m0.072s 00:06:14.132 13:19:10 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.132 ************************************ 00:06:14.132 END TEST event_perf 00:06:14.132 ************************************ 00:06:14.132 13:19:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.390 13:19:10 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:14.390 13:19:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:14.390 13:19:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.390 13:19:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.390 ************************************ 00:06:14.390 START TEST event_reactor 00:06:14.390 ************************************ 00:06:14.390 13:19:10 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:14.390 [2024-11-18 13:19:10.294624] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:14.390 [2024-11-18 13:19:10.294728] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70488 ] 00:06:14.390 [2024-11-18 13:19:10.440815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.390 [2024-11-18 13:19:10.459003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.764 test_start 00:06:15.764 oneshot 00:06:15.764 tick 100 00:06:15.764 tick 100 00:06:15.764 tick 250 00:06:15.764 tick 100 00:06:15.764 tick 100 00:06:15.764 tick 100 00:06:15.764 tick 250 00:06:15.764 tick 500 00:06:15.764 tick 100 00:06:15.764 tick 100 00:06:15.764 tick 250 00:06:15.764 tick 100 00:06:15.764 tick 100 00:06:15.764 test_end 00:06:15.764 00:06:15.764 real 0m1.227s 00:06:15.764 user 0m1.068s 00:06:15.764 sys 0m0.053s 00:06:15.764 13:19:11 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.764 13:19:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:15.764 ************************************ 00:06:15.764 END TEST event_reactor 00:06:15.764 ************************************ 00:06:15.764 13:19:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.764 13:19:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:15.764 13:19:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.764 13:19:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.764 ************************************ 00:06:15.764 START TEST event_reactor_perf 00:06:15.764 ************************************ 00:06:15.764 13:19:11 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.764 [2024-11-18 13:19:11.559260] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
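Every sub-test here goes through the same run_test helper, which prints the banner pair around START TEST and END TEST and times the command, producing the real/user/sys lines after each run; because the reactors busy-poll, user CPU time scales with the number of polling cores (about 4 seconds for the four-core event_perf run versus about 1 second for the single-core reactor tests). A hypothetical reduction of that wrapper, leaving out the argument-count check and xtrace toggling visible in the trace:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # the timed command; this emits the real/user/sys lines
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }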
00:06:15.764 [2024-11-18 13:19:11.559353] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70524 ] 00:06:15.764 [2024-11-18 13:19:11.708203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.764 [2024-11-18 13:19:11.724649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.698 test_start 00:06:16.698 test_end 00:06:16.698 Performance: 422502 events per second 00:06:16.698 00:06:16.698 real 0m1.232s 00:06:16.698 user 0m1.068s 00:06:16.698 sys 0m0.057s 00:06:16.698 13:19:12 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.698 ************************************ 00:06:16.698 END TEST event_reactor_perf 00:06:16.698 ************************************ 00:06:16.699 13:19:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.699 13:19:12 event -- event/event.sh@49 -- # uname -s 00:06:16.699 13:19:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:16.699 13:19:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:16.699 13:19:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.699 13:19:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.699 13:19:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.699 ************************************ 00:06:16.699 START TEST event_scheduler 00:06:16.699 ************************************ 00:06:16.699 13:19:12 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:16.958 * Looking for test storage... 
00:06:16.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.958 13:19:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.958 --rc genhtml_branch_coverage=1 00:06:16.958 --rc genhtml_function_coverage=1 00:06:16.958 --rc genhtml_legend=1 00:06:16.958 --rc geninfo_all_blocks=1 00:06:16.958 --rc geninfo_unexecuted_blocks=1 00:06:16.958 00:06:16.958 ' 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.958 --rc genhtml_branch_coverage=1 00:06:16.958 --rc genhtml_function_coverage=1 00:06:16.958 --rc genhtml_legend=1 00:06:16.958 --rc geninfo_all_blocks=1 00:06:16.958 --rc geninfo_unexecuted_blocks=1 00:06:16.958 00:06:16.958 ' 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.958 --rc genhtml_branch_coverage=1 00:06:16.958 --rc genhtml_function_coverage=1 00:06:16.958 --rc genhtml_legend=1 00:06:16.958 --rc geninfo_all_blocks=1 00:06:16.958 --rc geninfo_unexecuted_blocks=1 00:06:16.958 00:06:16.958 ' 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.958 --rc genhtml_branch_coverage=1 00:06:16.958 --rc genhtml_function_coverage=1 00:06:16.958 --rc genhtml_legend=1 00:06:16.958 --rc geninfo_all_blocks=1 00:06:16.958 --rc geninfo_unexecuted_blocks=1 00:06:16.958 00:06:16.958 ' 00:06:16.958 13:19:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:16.958 13:19:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70589 00:06:16.958 13:19:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.958 13:19:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70589 00:06:16.958 13:19:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:16.958 13:19:12 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70589 ']' 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.958 13:19:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:16.958 [2024-11-18 13:19:13.006888] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:16.958 [2024-11-18 13:19:13.007021] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70589 ] 00:06:17.216 [2024-11-18 13:19:13.165731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.216 [2024-11-18 13:19:13.187483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.216 [2024-11-18 13:19:13.187872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.216 [2024-11-18 13:19:13.187882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.216 [2024-11-18 13:19:13.187962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.782 13:19:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.782 13:19:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:17.782 13:19:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:17.782 13:19:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.782 13:19:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.782 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.782 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.782 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.782 POWER: Cannot set governor of lcore 0 to performance 00:06:17.782 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.782 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.782 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.782 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.782 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:17.782 POWER: Unable to set Power Management Environment for lcore 0 00:06:17.782 [2024-11-18 13:19:13.853438] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:17.782 [2024-11-18 13:19:13.853458] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:17.782 [2024-11-18 13:19:13.853467] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:17.782 [2024-11-18 13:19:13.853491] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:17.782 [2024-11-18 
13:19:13.853498] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:17.782 [2024-11-18 13:19:13.853508] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:17.782 13:19:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.782 13:19:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:17.782 13:19:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.782 13:19:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 [2024-11-18 13:19:13.910607] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:18.043 13:19:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:18.043 13:19:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.043 13:19:13 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.043 13:19:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 ************************************ 00:06:18.043 START TEST scheduler_create_thread 00:06:18.043 ************************************ 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 2 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 3 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 4 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:13 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 5 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 6 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 7 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 8 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 9 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 10 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:14 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.043 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.611 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.611 13:19:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:18.611 13:19:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:18.611 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.611 13:19:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.982 13:19:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.982 00:06:19.982 real 0m1.750s 00:06:19.982 user 0m0.013s 00:06:19.982 sys 0m0.006s 00:06:19.982 13:19:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.982 ************************************ 00:06:19.982 END TEST scheduler_create_thread 00:06:19.982 ************************************ 00:06:19.982 13:19:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.982 13:19:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:19.982 13:19:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70589 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70589 ']' 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70589 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70589 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:19.982 killing process with pid 70589 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70589' 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70589 00:06:19.982 13:19:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70589 00:06:20.242 [2024-11-18 13:19:16.156822] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:20.242 ************************************ 00:06:20.242 END TEST event_scheduler 00:06:20.242 ************************************ 00:06:20.242 00:06:20.242 real 0m3.469s 00:06:20.242 user 0m6.089s 00:06:20.242 sys 0m0.308s 00:06:20.242 13:19:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.242 13:19:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.242 13:19:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:20.242 13:19:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:20.242 13:19:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.242 13:19:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.242 13:19:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.242 ************************************ 00:06:20.242 START TEST app_repeat 00:06:20.242 ************************************ 00:06:20.242 13:19:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70679 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70679' 00:06:20.242 Process app_repeat pid: 70679 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.242 spdk_app_start Round 0 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:20.242 13:19:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70679 /var/tmp/spdk-nbd.sock 00:06:20.242 13:19:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70679 ']' 00:06:20.242 13:19:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.242 13:19:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.242 13:19:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.242 13:19:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.242 13:19:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.242 [2024-11-18 13:19:16.361917] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:20.242 [2024-11-18 13:19:16.362027] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70679 ] 00:06:20.500 [2024-11-18 13:19:16.513881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.500 [2024-11-18 13:19:16.531746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.500 [2024-11-18 13:19:16.531847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.434 13:19:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.434 13:19:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:21.434 13:19:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.434 Malloc0 00:06:21.434 13:19:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.692 Malloc1 00:06:21.692 13:19:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.692 13:19:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.951 /dev/nbd0 00:06:21.951 13:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.951 13:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:21.951 13:19:17 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.951 1+0 records in 00:06:21.951 1+0 records out 00:06:21.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489158 s, 8.4 MB/s 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.951 13:19:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.951 13:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.951 13:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.951 13:19:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.209 /dev/nbd1 00:06:22.209 13:19:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.209 13:19:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.209 1+0 records in 00:06:22.209 1+0 records out 00:06:22.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260909 s, 15.7 MB/s 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.209 13:19:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.209 13:19:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.209 13:19:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.209 13:19:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.210 13:19:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.210 
13:19:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.210 13:19:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.210 { 00:06:22.210 "nbd_device": "/dev/nbd0", 00:06:22.210 "bdev_name": "Malloc0" 00:06:22.210 }, 00:06:22.210 { 00:06:22.210 "nbd_device": "/dev/nbd1", 00:06:22.210 "bdev_name": "Malloc1" 00:06:22.210 } 00:06:22.210 ]' 00:06:22.210 13:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.210 { 00:06:22.210 "nbd_device": "/dev/nbd0", 00:06:22.210 "bdev_name": "Malloc0" 00:06:22.210 }, 00:06:22.210 { 00:06:22.210 "nbd_device": "/dev/nbd1", 00:06:22.210 "bdev_name": "Malloc1" 00:06:22.210 } 00:06:22.210 ]' 00:06:22.210 13:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.468 /dev/nbd1' 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.468 /dev/nbd1' 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.468 256+0 records in 00:06:22.468 256+0 records out 00:06:22.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416632 s, 252 MB/s 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.468 256+0 records in 00:06:22.468 256+0 records out 00:06:22.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137742 s, 76.1 MB/s 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.468 256+0 records in 00:06:22.468 256+0 records out 00:06:22.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136358 s, 76.9 MB/s 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.468 13:19:18 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.468 13:19:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.728 13:19:18 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.728 13:19:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.986 13:19:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.986 13:19:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.986 13:19:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.986 13:19:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.986 13:19:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.986 13:19:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.987 13:19:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.987 13:19:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.987 13:19:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.987 13:19:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.987 13:19:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.987 13:19:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.987 13:19:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.248 13:19:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.248 [2024-11-18 13:19:19.371947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.508 [2024-11-18 13:19:19.387617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.508 [2024-11-18 13:19:19.387842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.508 [2024-11-18 13:19:19.417405] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.508 [2024-11-18 13:19:19.417464] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.790 13:19:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.790 spdk_app_start Round 1 00:06:26.790 13:19:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:26.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.790 13:19:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70679 /var/tmp/spdk-nbd.sock 00:06:26.790 13:19:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70679 ']' 00:06:26.790 13:19:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.790 13:19:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.791 13:19:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:26.791 13:19:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.791 13:19:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.791 13:19:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.791 13:19:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:26.791 13:19:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.791 Malloc0 00:06:26.791 13:19:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.791 Malloc1 00:06:26.791 13:19:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.791 13:19:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.049 /dev/nbd0 00:06:27.049 13:19:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.049 13:19:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.049 1+0 records in 00:06:27.049 1+0 records out 
00:06:27.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277562 s, 14.8 MB/s 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.049 13:19:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:27.049 13:19:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.049 13:19:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.049 13:19:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.324 /dev/nbd1 00:06:27.324 13:19:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.324 13:19:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.324 1+0 records in 00:06:27.324 1+0 records out 00:06:27.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266021 s, 15.4 MB/s 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.324 13:19:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:27.324 13:19:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.324 13:19:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.324 13:19:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.324 13:19:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.324 13:19:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.583 { 00:06:27.583 "nbd_device": "/dev/nbd0", 00:06:27.583 "bdev_name": "Malloc0" 00:06:27.583 }, 00:06:27.583 { 00:06:27.583 "nbd_device": "/dev/nbd1", 00:06:27.583 "bdev_name": "Malloc1" 00:06:27.583 } 
00:06:27.583 ]' 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.583 { 00:06:27.583 "nbd_device": "/dev/nbd0", 00:06:27.583 "bdev_name": "Malloc0" 00:06:27.583 }, 00:06:27.583 { 00:06:27.583 "nbd_device": "/dev/nbd1", 00:06:27.583 "bdev_name": "Malloc1" 00:06:27.583 } 00:06:27.583 ]' 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.583 /dev/nbd1' 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.583 /dev/nbd1' 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.583 256+0 records in 00:06:27.583 256+0 records out 00:06:27.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111641 s, 93.9 MB/s 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.583 256+0 records in 00:06:27.583 256+0 records out 00:06:27.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181421 s, 57.8 MB/s 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.583 256+0 records in 00:06:27.583 256+0 records out 00:06:27.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194228 s, 54.0 MB/s 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.583 13:19:23 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.583 13:19:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.841 13:19:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.099 13:19:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.357 13:19:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.357 13:19:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.616 13:19:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.616 [2024-11-18 13:19:24.607339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.616 [2024-11-18 13:19:24.623323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.616 [2024-11-18 13:19:24.623508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.616 [2024-11-18 13:19:24.651931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.616 [2024-11-18 13:19:24.651976] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.897 13:19:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.898 spdk_app_start Round 2 00:06:31.898 13:19:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:31.898 13:19:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70679 /var/tmp/spdk-nbd.sock 00:06:31.898 13:19:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70679 ']' 00:06:31.898 13:19:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.898 13:19:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.898 13:19:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:31.898 13:19:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.898 13:19:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.898 13:19:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.898 13:19:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:31.898 13:19:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.898 Malloc0 00:06:31.898 13:19:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.156 Malloc1 00:06:32.156 13:19:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.156 13:19:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.415 /dev/nbd0 00:06:32.415 13:19:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.415 13:19:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.415 1+0 records in 00:06:32.415 1+0 records out 
00:06:32.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000137966 s, 29.7 MB/s 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.415 13:19:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.415 13:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.415 13:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.415 13:19:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.673 /dev/nbd1 00:06:32.673 13:19:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.673 13:19:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.673 1+0 records in 00:06:32.673 1+0 records out 00:06:32.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167274 s, 24.5 MB/s 00:06:32.673 13:19:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.674 13:19:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.674 13:19:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.674 13:19:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.674 13:19:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.674 13:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.674 13:19:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.674 13:19:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.674 13:19:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.674 13:19:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.932 { 00:06:32.932 "nbd_device": "/dev/nbd0", 00:06:32.932 "bdev_name": "Malloc0" 00:06:32.932 }, 00:06:32.932 { 00:06:32.932 "nbd_device": "/dev/nbd1", 00:06:32.932 "bdev_name": "Malloc1" 00:06:32.932 } 
00:06:32.932 ]' 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.932 { 00:06:32.932 "nbd_device": "/dev/nbd0", 00:06:32.932 "bdev_name": "Malloc0" 00:06:32.932 }, 00:06:32.932 { 00:06:32.932 "nbd_device": "/dev/nbd1", 00:06:32.932 "bdev_name": "Malloc1" 00:06:32.932 } 00:06:32.932 ]' 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.932 /dev/nbd1' 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.932 /dev/nbd1' 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.932 256+0 records in 00:06:32.932 256+0 records out 00:06:32.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00914061 s, 115 MB/s 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.932 256+0 records in 00:06:32.932 256+0 records out 00:06:32.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141484 s, 74.1 MB/s 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.932 256+0 records in 00:06:32.932 256+0 records out 00:06:32.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215688 s, 48.6 MB/s 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.932 13:19:28 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:19:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.191 13:19:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.449 13:19:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.449 13:19:29 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.720 13:19:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.720 13:19:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.720 13:19:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.720 13:19:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.720 13:19:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.720 13:19:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.720 13:19:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.720 13:19:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.720 13:19:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.720 13:19:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.720 13:19:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.008 [2024-11-18 13:19:29.872232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.008 [2024-11-18 13:19:29.887782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.008 [2024-11-18 13:19:29.887954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.008 [2024-11-18 13:19:29.916653] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.008 [2024-11-18 13:19:29.916697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.288 13:19:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70679 /var/tmp/spdk-nbd.sock 00:06:37.288 13:19:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70679 ']' 00:06:37.288 13:19:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.288 13:19:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.288 13:19:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:37.288 13:19:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.288 13:19:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.288 13:19:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.288 13:19:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:37.288 13:19:33 event.app_repeat -- event/event.sh@39 -- # killprocess 70679 00:06:37.288 13:19:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70679 ']' 00:06:37.288 13:19:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70679 00:06:37.288 13:19:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:37.288 13:19:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.288 13:19:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70679 00:06:37.288 killing process with pid 70679 00:06:37.288 13:19:33 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.288 13:19:33 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.289 13:19:33 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70679' 00:06:37.289 13:19:33 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70679 00:06:37.289 13:19:33 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70679 00:06:37.289 spdk_app_start is called in Round 0. 00:06:37.289 Shutdown signal received, stop current app iteration 00:06:37.289 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 reinitialization... 00:06:37.289 spdk_app_start is called in Round 1. 00:06:37.289 Shutdown signal received, stop current app iteration 00:06:37.289 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 reinitialization... 00:06:37.289 spdk_app_start is called in Round 2. 00:06:37.289 Shutdown signal received, stop current app iteration 00:06:37.289 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 reinitialization... 00:06:37.289 spdk_app_start is called in Round 3. 00:06:37.289 Shutdown signal received, stop current app iteration 00:06:37.289 13:19:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:37.289 13:19:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:37.289 00:06:37.289 real 0m16.816s 00:06:37.289 user 0m37.648s 00:06:37.289 sys 0m2.066s 00:06:37.289 13:19:33 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.289 ************************************ 00:06:37.289 END TEST app_repeat 00:06:37.289 ************************************ 00:06:37.289 13:19:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.289 13:19:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:37.289 13:19:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:37.289 13:19:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.289 13:19:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.289 13:19:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.289 ************************************ 00:06:37.289 START TEST cpu_locks 00:06:37.289 ************************************ 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:37.289 * Looking for test storage... 
00:06:37.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.289 13:19:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.289 --rc genhtml_branch_coverage=1 00:06:37.289 --rc genhtml_function_coverage=1 00:06:37.289 --rc genhtml_legend=1 00:06:37.289 --rc geninfo_all_blocks=1 00:06:37.289 --rc geninfo_unexecuted_blocks=1 00:06:37.289 00:06:37.289 ' 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.289 --rc genhtml_branch_coverage=1 00:06:37.289 --rc genhtml_function_coverage=1 
00:06:37.289 --rc genhtml_legend=1 00:06:37.289 --rc geninfo_all_blocks=1 00:06:37.289 --rc geninfo_unexecuted_blocks=1 00:06:37.289 00:06:37.289 ' 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.289 --rc genhtml_branch_coverage=1 00:06:37.289 --rc genhtml_function_coverage=1 00:06:37.289 --rc genhtml_legend=1 00:06:37.289 --rc geninfo_all_blocks=1 00:06:37.289 --rc geninfo_unexecuted_blocks=1 00:06:37.289 00:06:37.289 ' 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.289 --rc genhtml_branch_coverage=1 00:06:37.289 --rc genhtml_function_coverage=1 00:06:37.289 --rc genhtml_legend=1 00:06:37.289 --rc geninfo_all_blocks=1 00:06:37.289 --rc geninfo_unexecuted_blocks=1 00:06:37.289 00:06:37.289 ' 00:06:37.289 13:19:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:37.289 13:19:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:37.289 13:19:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:37.289 13:19:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:37.289 13:19:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.290 13:19:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.290 13:19:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.290 ************************************ 00:06:37.290 START TEST default_locks 00:06:37.290 ************************************ 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71098 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71098 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71098 ']' 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.290 13:19:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.290 [2024-11-18 13:19:33.388210] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:37.290 [2024-11-18 13:19:33.388328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71098 ] 00:06:37.547 [2024-11-18 13:19:33.538348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.547 [2024-11-18 13:19:33.555092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.113 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.113 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:38.113 13:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71098 00:06:38.113 13:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71098 00:06:38.113 13:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71098 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71098 ']' 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71098 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71098 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.372 killing process with pid 71098 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71098' 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71098 00:06:38.372 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71098 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71098 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71098 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71098 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71098 ']' 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.630 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.630 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71098) - No such process 00:06:38.630 ERROR: process (pid: 71098) is no longer running 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.630 00:06:38.630 real 0m1.295s 00:06:38.630 user 0m1.305s 00:06:38.630 sys 0m0.371s 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.630 13:19:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.630 ************************************ 00:06:38.630 END TEST default_locks 00:06:38.630 ************************************ 00:06:38.630 13:19:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:38.630 13:19:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.630 13:19:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.630 13:19:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.630 ************************************ 00:06:38.630 START TEST default_locks_via_rpc 00:06:38.630 ************************************ 00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71146 00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71146 00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71146 ']' 00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.630 13:19:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.630 [2024-11-18 13:19:34.727086] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:38.630 [2024-11-18 13:19:34.727220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71146 ] 00:06:38.888 [2024-11-18 13:19:34.882656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.888 [2024-11-18 13:19:34.901428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71146 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.492 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71146 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71146 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71146 ']' 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71146 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:39.751 13:19:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71146 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.751 killing process with pid 71146 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71146' 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71146 00:06:39.751 13:19:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71146 00:06:40.010 00:06:40.010 real 0m1.369s 00:06:40.010 user 0m1.372s 00:06:40.010 sys 0m0.414s 00:06:40.010 13:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.010 13:19:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.010 ************************************ 00:06:40.010 END TEST default_locks_via_rpc 00:06:40.010 ************************************ 00:06:40.010 13:19:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:40.010 13:19:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.010 13:19:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.010 13:19:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.010 ************************************ 00:06:40.010 START TEST non_locking_app_on_locked_coremask 00:06:40.010 ************************************ 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71187 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71187 /var/tmp/spdk.sock 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71187 ']' 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.010 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.010 [2024-11-18 13:19:36.133504] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:40.010 [2024-11-18 13:19:36.133617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71187 ] 00:06:40.268 [2024-11-18 13:19:36.286284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.268 [2024-11-18 13:19:36.303372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71203 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71203 /var/tmp/spdk2.sock 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71203 ']' 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.834 13:19:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.092 [2024-11-18 13:19:37.008519] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:41.092 [2024-11-18 13:19:37.008642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71203 ] 00:06:41.092 [2024-11-18 13:19:37.168139] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.092 [2024-11-18 13:19:37.168192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.092 [2024-11-18 13:19:37.201115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.025 13:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.025 13:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.025 13:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71187 00:06:42.025 13:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71187 00:06:42.025 13:19:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.025 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71187 00:06:42.025 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71187 ']' 00:06:42.025 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71187 00:06:42.025 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.282 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.283 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71187 00:06:42.283 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.283 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.283 killing process with pid 71187 00:06:42.283 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71187' 00:06:42.283 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71187 00:06:42.283 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71187 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71203 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71203 ']' 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71203 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71203 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71203' 00:06:42.540 killing process with pid 71203 00:06:42.540 13:19:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71203 00:06:42.540 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71203 00:06:42.797 00:06:42.797 real 0m2.780s 00:06:42.797 user 0m3.083s 00:06:42.797 sys 0m0.733s 00:06:42.797 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.797 13:19:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.797 ************************************ 00:06:42.797 END TEST non_locking_app_on_locked_coremask 00:06:42.797 ************************************ 00:06:42.797 13:19:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:42.797 13:19:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.797 13:19:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.797 13:19:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.797 ************************************ 00:06:42.797 START TEST locking_app_on_unlocked_coremask 00:06:42.797 ************************************ 00:06:42.797 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:42.797 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71261 00:06:42.797 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71261 /var/tmp/spdk.sock 00:06:42.797 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71261 ']' 00:06:42.797 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.798 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.798 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.798 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.798 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.798 13:19:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.054 [2024-11-18 13:19:38.951872] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:43.054 [2024-11-18 13:19:38.951980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71261 ] 00:06:43.054 [2024-11-18 13:19:39.104793] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.054 [2024-11-18 13:19:39.104837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.054 [2024-11-18 13:19:39.122051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71277 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71277 /var/tmp/spdk2.sock 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71277 ']' 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.673 13:19:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.930 [2024-11-18 13:19:39.859330] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:43.930 [2024-11-18 13:19:39.859443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71277 ] 00:06:43.930 [2024-11-18 13:19:40.020919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.930 [2024-11-18 13:19:40.053707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.864 13:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.864 13:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.864 13:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71277 00:06:44.864 13:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.864 13:19:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71277 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71261 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71261 ']' 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71261 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71261 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.122 killing process with pid 71261 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71261' 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71261 00:06:45.122 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71261 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71277 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71277 ']' 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71277 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71277 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.687 killing process with pid 71277 00:06:45.687 13:19:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71277' 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71277 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71277 00:06:45.687 00:06:45.687 real 0m2.875s 00:06:45.687 user 0m3.217s 00:06:45.687 sys 0m0.746s 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.687 ************************************ 00:06:45.687 END TEST locking_app_on_unlocked_coremask 00:06:45.687 ************************************ 00:06:45.687 13:19:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:45.687 13:19:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.687 13:19:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.687 13:19:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.687 ************************************ 00:06:45.687 START TEST locking_app_on_locked_coremask 00:06:45.687 ************************************ 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71335 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71335 /var/tmp/spdk.sock 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71335 ']' 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.687 13:19:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.946 [2024-11-18 13:19:41.865296] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:45.946 [2024-11-18 13:19:41.865424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71335 ] 00:06:45.946 [2024-11-18 13:19:42.020821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.946 [2024-11-18 13:19:42.039523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71351 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71351 /var/tmp/spdk2.sock 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71351 /var/tmp/spdk2.sock 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71351 /var/tmp/spdk2.sock 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71351 ']' 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.880 13:19:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.880 [2024-11-18 13:19:42.772546] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:46.881 [2024-11-18 13:19:42.772949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71351 ] 00:06:46.881 [2024-11-18 13:19:42.946569] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71335 has claimed it. 00:06:46.881 [2024-11-18 13:19:42.946637] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.484 ERROR: process (pid: 71351) is no longer running 00:06:47.484 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71351) - No such process 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71335 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71335 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71335 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71335 ']' 00:06:47.484 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71335 00:06:47.744 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.744 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.744 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71335 00:06:47.744 killing process with pid 71335 00:06:47.744 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.744 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.744 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71335' 00:06:47.744 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71335 00:06:47.744 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71335 00:06:48.003 00:06:48.003 real 0m2.094s 00:06:48.003 user 0m2.346s 00:06:48.003 sys 0m0.500s 00:06:48.003 13:19:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.003 13:19:43 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:48.003 ************************************ 00:06:48.003 END TEST locking_app_on_locked_coremask 00:06:48.003 ************************************ 00:06:48.003 13:19:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:48.003 13:19:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.003 13:19:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.003 13:19:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.003 ************************************ 00:06:48.003 START TEST locking_overlapped_coremask 00:06:48.003 ************************************ 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71393 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71393 /var/tmp/spdk.sock 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71393 ']' 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.003 13:19:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.003 [2024-11-18 13:19:44.001440] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:48.003 [2024-11-18 13:19:44.001563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71393 ] 00:06:48.262 [2024-11-18 13:19:44.158003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.262 [2024-11-18 13:19:44.179018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.262 [2024-11-18 13:19:44.179510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.262 [2024-11-18 13:19:44.179559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71411 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71411 /var/tmp/spdk2.sock 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71411 /var/tmp/spdk2.sock 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71411 /var/tmp/spdk2.sock 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71411 ']' 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.829 13:19:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.829 [2024-11-18 13:19:44.901857] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:48.829 [2024-11-18 13:19:44.902250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71411 ] 00:06:49.088 [2024-11-18 13:19:45.073805] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71393 has claimed it. 00:06:49.088 [2024-11-18 13:19:45.073874] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.654 ERROR: process (pid: 71411) is no longer running 00:06:49.654 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71411) - No such process 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71393 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71393 ']' 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71393 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71393 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71393' 00:06:49.654 killing process with pid 71393 00:06:49.654 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71393 00:06:49.654 13:19:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71393 00:06:49.913 00:06:49.913 real 0m1.894s 00:06:49.913 user 0m5.236s 00:06:49.913 sys 0m0.392s 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.913 ************************************ 00:06:49.913 END TEST locking_overlapped_coremask 00:06:49.913 ************************************ 00:06:49.913 13:19:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.913 13:19:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.913 13:19:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.913 13:19:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.913 ************************************ 00:06:49.913 START TEST locking_overlapped_coremask_via_rpc 00:06:49.913 ************************************ 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71453 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71453 /var/tmp/spdk.sock 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71453 ']' 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.913 13:19:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.913 [2024-11-18 13:19:45.933598] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:49.913 [2024-11-18 13:19:45.933735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71453 ] 00:06:50.171 [2024-11-18 13:19:46.083234] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
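The check_remaining_locks step traced above globs /var/tmp/spdk_cpu_lock_* and compares the result against the expansion of /var/tmp/spdk_cpu_lock_{000..002}, i.e. one lock file per core of the 0x7 mask. The "CPU core locks deactivated" notice just above, by contrast, comes from the next target being started with --disable-cpumask-locks, so it takes no lock files until the via_rpc test turns locking back on over RPC. A standalone sketch of the same file check (illustrative only, not the cpu_locks.sh implementation):

    # Illustrative sketch: check that exactly the lock files for cores 0..2
    # remain, mirroring check_remaining_locks for a target started with -m 0x7.
    import glob

    expected = [f"/var/tmp/spdk_cpu_lock_{core:03d}" for core in range(3)]
    found = sorted(glob.glob("/var/tmp/spdk_cpu_lock_*"))
    assert found == expected, f"unexpected lock files: {found}"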
00:06:50.171 [2024-11-18 13:19:46.083428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.171 [2024-11-18 13:19:46.103859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.171 [2024-11-18 13:19:46.104106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.171 [2024-11-18 13:19:46.104140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71471 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71471 /var/tmp/spdk2.sock 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71471 ']' 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.737 13:19:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.737 [2024-11-18 13:19:46.829347] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:50.737 [2024-11-18 13:19:46.829641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71471 ] 00:06:50.996 [2024-11-18 13:19:47.001611] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.996 [2024-11-18 13:19:47.001660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.996 [2024-11-18 13:19:47.042234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.997 [2024-11-18 13:19:47.045247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.997 [2024-11-18 13:19:47.045275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.574 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.834 [2024-11-18 13:19:47.709355] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71453 has claimed it. 00:06:51.834 request: 00:06:51.834 { 00:06:51.834 "method": "framework_enable_cpumask_locks", 00:06:51.834 "req_id": 1 00:06:51.834 } 00:06:51.834 Got JSON-RPC error response 00:06:51.834 response: 00:06:51.834 { 00:06:51.834 "code": -32603, 00:06:51.834 "message": "Failed to claim CPU core: 2" 00:06:51.834 } 00:06:51.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
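The request/response pair above shows framework_enable_cpumask_locks being rejected with -32603 because pid 71453, having just enabled its own locks over RPC, already holds the lock file for core 2, which both masks (0x7 and 0x1c) contain. In the test this call goes through scripts/rpc.py against /var/tmp/spdk2.sock; purely as an illustration of the wire format implied by the log (the single-recv framing below is an assumption, not how rpc.py is implemented), the same request could be issued directly over the Unix socket:

    # Hypothetical helper: send one JSON-RPC 2.0 request to an SPDK target socket.
    import json
    import socket

    def rpc_call(sock_path: str, method: str, params: dict | None = None) -> dict:
        req = {"jsonrpc": "2.0", "method": method, "id": 1}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            return json.loads(s.recv(1 << 16).decode())  # sketch: assumes one recv suffices

    # Against the second target while pid 71453 holds core 2, this returns the
    # -32603 "Failed to claim CPU core: 2" error captured above.
    print(rpc_call("/var/tmp/spdk2.sock", "framework_enable_cpumask_locks"))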
00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71453 /var/tmp/spdk.sock 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71453 ']' 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71471 /var/tmp/spdk2.sock 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71471 ']' 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.834 13:19:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.093 ************************************ 00:06:52.093 END TEST locking_overlapped_coremask_via_rpc 00:06:52.093 ************************************ 00:06:52.093 13:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.093 13:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.093 13:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.093 13:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.093 13:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.093 13:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.093 00:06:52.093 real 0m2.306s 00:06:52.093 user 0m1.117s 00:06:52.093 sys 0m0.117s 00:06:52.093 13:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.093 13:19:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.093 13:19:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:52.093 13:19:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71453 ]] 00:06:52.093 13:19:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71453 00:06:52.093 13:19:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71453 ']' 00:06:52.093 13:19:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71453 00:06:52.093 13:19:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.093 13:19:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.093 13:19:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71453 00:06:52.352 killing process with pid 71453 00:06:52.352 13:19:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.352 13:19:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.352 13:19:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71453' 00:06:52.352 13:19:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71453 00:06:52.352 13:19:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71453 00:06:52.613 13:19:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71471 ]] 00:06:52.613 13:19:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71471 00:06:52.613 13:19:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71471 ']' 00:06:52.613 13:19:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71471 00:06:52.613 13:19:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.613 13:19:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.613 
13:19:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71471 00:06:52.613 killing process with pid 71471 00:06:52.613 13:19:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:52.613 13:19:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:52.613 13:19:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71471' 00:06:52.613 13:19:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71471 00:06:52.613 13:19:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71471 00:06:52.875 13:19:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.875 13:19:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.875 13:19:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71453 ]] 00:06:52.875 13:19:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71453 00:06:52.875 13:19:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71453 ']' 00:06:52.875 Process with pid 71453 is not found 00:06:52.875 13:19:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71453 00:06:52.875 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71453) - No such process 00:06:52.875 13:19:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71453 is not found' 00:06:52.875 13:19:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71471 ]] 00:06:52.875 13:19:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71471 00:06:52.875 13:19:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71471 ']' 00:06:52.875 13:19:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71471 00:06:52.875 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71471) - No such process 00:06:52.875 Process with pid 71471 is not found 00:06:52.875 13:19:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71471 is not found' 00:06:52.875 13:19:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.875 00:06:52.875 real 0m15.619s 00:06:52.875 user 0m28.260s 00:06:52.875 sys 0m4.021s 00:06:52.875 13:19:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.875 13:19:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 ************************************ 00:06:52.875 END TEST cpu_locks 00:06:52.875 ************************************ 00:06:52.875 ************************************ 00:06:52.875 END TEST event 00:06:52.875 ************************************ 00:06:52.875 00:06:52.875 real 0m39.992s 00:06:52.875 user 1m18.345s 00:06:52.875 sys 0m6.789s 00:06:52.875 13:19:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.875 13:19:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 13:19:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.875 13:19:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.875 13:19:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.875 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 ************************************ 00:06:52.875 START TEST thread 00:06:52.875 ************************************ 00:06:52.875 13:19:48 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.875 * Looking for test storage... 
00:06:52.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:52.875 13:19:48 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.875 13:19:48 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.875 13:19:48 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.136 13:19:49 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.136 13:19:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.136 13:19:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.136 13:19:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.136 13:19:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.136 13:19:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.136 13:19:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.136 13:19:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.136 13:19:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.136 13:19:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.136 13:19:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.136 13:19:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.136 13:19:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:53.136 13:19:49 thread -- scripts/common.sh@345 -- # : 1 00:06:53.136 13:19:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.136 13:19:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.136 13:19:49 thread -- scripts/common.sh@365 -- # decimal 1 00:06:53.136 13:19:49 thread -- scripts/common.sh@353 -- # local d=1 00:06:53.136 13:19:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.136 13:19:49 thread -- scripts/common.sh@355 -- # echo 1 00:06:53.136 13:19:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.136 13:19:49 thread -- scripts/common.sh@366 -- # decimal 2 00:06:53.136 13:19:49 thread -- scripts/common.sh@353 -- # local d=2 00:06:53.136 13:19:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.136 13:19:49 thread -- scripts/common.sh@355 -- # echo 2 00:06:53.136 13:19:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.136 13:19:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.136 13:19:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.136 13:19:49 thread -- scripts/common.sh@368 -- # return 0 00:06:53.136 13:19:49 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.136 13:19:49 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.136 --rc genhtml_branch_coverage=1 00:06:53.136 --rc genhtml_function_coverage=1 00:06:53.136 --rc genhtml_legend=1 00:06:53.136 --rc geninfo_all_blocks=1 00:06:53.136 --rc geninfo_unexecuted_blocks=1 00:06:53.136 00:06:53.136 ' 00:06:53.136 13:19:49 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.136 --rc genhtml_branch_coverage=1 00:06:53.136 --rc genhtml_function_coverage=1 00:06:53.136 --rc genhtml_legend=1 00:06:53.136 --rc geninfo_all_blocks=1 00:06:53.136 --rc geninfo_unexecuted_blocks=1 00:06:53.136 00:06:53.136 ' 00:06:53.136 13:19:49 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:53.136 --rc genhtml_branch_coverage=1 00:06:53.136 --rc genhtml_function_coverage=1 00:06:53.136 --rc genhtml_legend=1 00:06:53.136 --rc geninfo_all_blocks=1 00:06:53.136 --rc geninfo_unexecuted_blocks=1 00:06:53.136 00:06:53.136 ' 00:06:53.136 13:19:49 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.136 --rc genhtml_branch_coverage=1 00:06:53.136 --rc genhtml_function_coverage=1 00:06:53.136 --rc genhtml_legend=1 00:06:53.136 --rc geninfo_all_blocks=1 00:06:53.136 --rc geninfo_unexecuted_blocks=1 00:06:53.136 00:06:53.136 ' 00:06:53.136 13:19:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.136 13:19:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:53.136 13:19:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.136 13:19:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.136 ************************************ 00:06:53.136 START TEST thread_poller_perf 00:06:53.136 ************************************ 00:06:53.136 13:19:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.136 [2024-11-18 13:19:49.058101] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:53.136 [2024-11-18 13:19:49.058233] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71598 ] 00:06:53.136 [2024-11-18 13:19:49.207298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.136 [2024-11-18 13:19:49.227147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.136 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:54.520 [2024-11-18T13:19:50.648Z] ====================================== 00:06:54.520 [2024-11-18T13:19:50.648Z] busy:2614475038 (cyc) 00:06:54.520 [2024-11-18T13:19:50.648Z] total_run_count: 306000 00:06:54.520 [2024-11-18T13:19:50.648Z] tsc_hz: 2600000000 (cyc) 00:06:54.520 [2024-11-18T13:19:50.648Z] ====================================== 00:06:54.520 [2024-11-18T13:19:50.648Z] poller_cost: 8544 (cyc), 3286 (nsec) 00:06:54.520 00:06:54.520 real 0m1.250s 00:06:54.520 user 0m1.078s 00:06:54.520 sys 0m0.065s 00:06:54.520 13:19:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.520 13:19:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.520 ************************************ 00:06:54.520 END TEST thread_poller_perf 00:06:54.520 ************************************ 00:06:54.520 13:19:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.520 13:19:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:54.520 13:19:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.520 13:19:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.520 ************************************ 00:06:54.520 START TEST thread_poller_perf 00:06:54.520 ************************************ 00:06:54.520 13:19:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.520 [2024-11-18 13:19:50.367654] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:54.520 [2024-11-18 13:19:50.367904] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71629 ] 00:06:54.520 [2024-11-18 13:19:50.526215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.520 [2024-11-18 13:19:50.545302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.520 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:55.462 [2024-11-18T13:19:51.590Z] ====================================== 00:06:55.462 [2024-11-18T13:19:51.590Z] busy:2603171208 (cyc) 00:06:55.462 [2024-11-18T13:19:51.590Z] total_run_count: 3974000 00:06:55.462 [2024-11-18T13:19:51.590Z] tsc_hz: 2600000000 (cyc) 00:06:55.462 [2024-11-18T13:19:51.590Z] ====================================== 00:06:55.462 [2024-11-18T13:19:51.590Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:55.723 00:06:55.723 real 0m1.257s 00:06:55.723 user 0m1.085s 00:06:55.723 sys 0m0.065s 00:06:55.723 ************************************ 00:06:55.723 END TEST thread_poller_perf 00:06:55.723 ************************************ 00:06:55.723 13:19:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.723 13:19:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.723 13:19:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.723 ************************************ 00:06:55.723 END TEST thread 00:06:55.723 ************************************ 00:06:55.723 00:06:55.723 real 0m2.760s 00:06:55.723 user 0m2.282s 00:06:55.723 sys 0m0.246s 00:06:55.723 13:19:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.723 13:19:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.723 13:19:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:55.723 13:19:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.723 13:19:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.723 13:19:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.723 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:06:55.723 ************************************ 00:06:55.723 START TEST app_cmdline 00:06:55.723 ************************************ 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.723 * Looking for test storage... 
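For both poller_perf runs above, poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz of 2600000000 (2.6 GHz): 2614475038 / 306000 gives roughly 8544 cycles, about 3286 ns, for the 1 us-period run, and 2603171208 / 3974000 gives roughly 655 cycles, about 251 ns, for the 0 us-period run. A quick recomputation from the printed figures (the integer truncation used by poller_perf itself is assumed here):

    # Recompute poller_cost from the numbers printed in the two runs above.
    def poller_cost(busy_cycles: int, runs: int, tsc_hz: int) -> tuple[int, int]:
        cycles = busy_cycles // runs                  # cycles per poller invocation
        nsec = int(cycles * 1_000_000_000 / tsc_hz)   # same cost in nanoseconds
        return cycles, nsec

    print(poller_cost(2614475038, 306000, 2600000000))   # (8544, 3286)
    print(poller_cost(2603171208, 3974000, 2600000000))  # (655, 251)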
00:06:55.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:55.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.723 13:19:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.723 --rc genhtml_branch_coverage=1 00:06:55.723 --rc genhtml_function_coverage=1 00:06:55.723 --rc genhtml_legend=1 00:06:55.723 --rc geninfo_all_blocks=1 00:06:55.723 --rc geninfo_unexecuted_blocks=1 00:06:55.723 00:06:55.723 ' 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.723 --rc genhtml_branch_coverage=1 00:06:55.723 --rc genhtml_function_coverage=1 00:06:55.723 --rc genhtml_legend=1 00:06:55.723 --rc geninfo_all_blocks=1 00:06:55.723 --rc geninfo_unexecuted_blocks=1 00:06:55.723 00:06:55.723 ' 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.723 --rc genhtml_branch_coverage=1 00:06:55.723 --rc genhtml_function_coverage=1 00:06:55.723 --rc genhtml_legend=1 00:06:55.723 --rc geninfo_all_blocks=1 00:06:55.723 --rc geninfo_unexecuted_blocks=1 00:06:55.723 00:06:55.723 ' 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.723 --rc genhtml_branch_coverage=1 00:06:55.723 --rc genhtml_function_coverage=1 00:06:55.723 --rc genhtml_legend=1 00:06:55.723 --rc geninfo_all_blocks=1 00:06:55.723 --rc geninfo_unexecuted_blocks=1 00:06:55.723 00:06:55.723 ' 00:06:55.723 13:19:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.723 13:19:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71714 00:06:55.723 13:19:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71714 00:06:55.723 13:19:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71714 ']' 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.723 13:19:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.984 [2024-11-18 13:19:51.928509] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:06:55.984 [2024-11-18 13:19:51.928833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71714 ] 00:06:55.984 [2024-11-18 13:19:52.089892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.984 [2024-11-18 13:19:52.109286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:56.925 { 00:06:56.925 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:06:56.925 "fields": { 00:06:56.925 "major": 25, 00:06:56.925 "minor": 1, 00:06:56.925 "patch": 0, 00:06:56.925 "suffix": "-pre", 00:06:56.925 "commit": "d47eb51c9" 00:06:56.925 } 00:06:56.925 } 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.925 13:19:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:56.925 13:19:52 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.187 request: 00:06:57.187 { 00:06:57.187 "method": "env_dpdk_get_mem_stats", 00:06:57.187 "req_id": 1 00:06:57.187 } 00:06:57.187 Got JSON-RPC error response 00:06:57.187 response: 00:06:57.187 { 00:06:57.187 "code": -32601, 00:06:57.187 "message": "Method not found" 00:06:57.187 } 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.187 13:19:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71714 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71714 ']' 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71714 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71714 00:06:57.187 killing process with pid 71714 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71714' 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 71714 00:06:57.187 13:19:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 71714 00:06:57.448 ************************************ 00:06:57.448 END TEST app_cmdline 00:06:57.448 ************************************ 00:06:57.448 00:06:57.448 real 0m1.772s 00:06:57.448 user 0m2.100s 00:06:57.448 sys 0m0.412s 00:06:57.448 13:19:53 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.448 13:19:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.448 13:19:53 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:57.448 13:19:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.448 13:19:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.448 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:06:57.448 ************************************ 00:06:57.448 START TEST version 00:06:57.448 ************************************ 00:06:57.448 13:19:53 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:57.709 * Looking for test storage... 
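The -32601 "Method not found" above is the expected outcome: this spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so env_dpdk_get_mem_stats is refused even though an unrestricted target would serve it. Reusing the illustrative rpc_call() helper sketched earlier (again an assumption about the wire format, not the rpc.py implementation), the restricted method list could be confirmed like this:

    # With --rpcs-allowed in effect, only the two whitelisted methods should be listed.
    methods = rpc_call("/var/tmp/spdk.sock", "rpc_get_methods")["result"]
    print(sorted(methods))   # expected: ['rpc_get_methods', 'spdk_get_version']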
00:06:57.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.709 13:19:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.709 13:19:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.709 13:19:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.709 13:19:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.709 13:19:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.709 13:19:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.709 13:19:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.709 13:19:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.709 13:19:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.709 13:19:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.709 13:19:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.709 13:19:53 version -- scripts/common.sh@344 -- # case "$op" in 00:06:57.709 13:19:53 version -- scripts/common.sh@345 -- # : 1 00:06:57.709 13:19:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.709 13:19:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.709 13:19:53 version -- scripts/common.sh@365 -- # decimal 1 00:06:57.709 13:19:53 version -- scripts/common.sh@353 -- # local d=1 00:06:57.709 13:19:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.709 13:19:53 version -- scripts/common.sh@355 -- # echo 1 00:06:57.709 13:19:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.709 13:19:53 version -- scripts/common.sh@366 -- # decimal 2 00:06:57.709 13:19:53 version -- scripts/common.sh@353 -- # local d=2 00:06:57.709 13:19:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.709 13:19:53 version -- scripts/common.sh@355 -- # echo 2 00:06:57.709 13:19:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.709 13:19:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.709 13:19:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.709 13:19:53 version -- scripts/common.sh@368 -- # return 0 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.709 --rc genhtml_branch_coverage=1 00:06:57.709 --rc genhtml_function_coverage=1 00:06:57.709 --rc genhtml_legend=1 00:06:57.709 --rc geninfo_all_blocks=1 00:06:57.709 --rc geninfo_unexecuted_blocks=1 00:06:57.709 00:06:57.709 ' 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.709 --rc genhtml_branch_coverage=1 00:06:57.709 --rc genhtml_function_coverage=1 00:06:57.709 --rc genhtml_legend=1 00:06:57.709 --rc geninfo_all_blocks=1 00:06:57.709 --rc geninfo_unexecuted_blocks=1 00:06:57.709 00:06:57.709 ' 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.709 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:57.709 --rc genhtml_branch_coverage=1 00:06:57.709 --rc genhtml_function_coverage=1 00:06:57.709 --rc genhtml_legend=1 00:06:57.709 --rc geninfo_all_blocks=1 00:06:57.709 --rc geninfo_unexecuted_blocks=1 00:06:57.709 00:06:57.709 ' 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.709 --rc genhtml_branch_coverage=1 00:06:57.709 --rc genhtml_function_coverage=1 00:06:57.709 --rc genhtml_legend=1 00:06:57.709 --rc geninfo_all_blocks=1 00:06:57.709 --rc geninfo_unexecuted_blocks=1 00:06:57.709 00:06:57.709 ' 00:06:57.709 13:19:53 version -- app/version.sh@17 -- # get_header_version major 00:06:57.709 13:19:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.709 13:19:53 version -- app/version.sh@14 -- # cut -f2 00:06:57.709 13:19:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.709 13:19:53 version -- app/version.sh@17 -- # major=25 00:06:57.709 13:19:53 version -- app/version.sh@18 -- # get_header_version minor 00:06:57.709 13:19:53 version -- app/version.sh@14 -- # cut -f2 00:06:57.709 13:19:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.709 13:19:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.709 13:19:53 version -- app/version.sh@18 -- # minor=1 00:06:57.709 13:19:53 version -- app/version.sh@19 -- # get_header_version patch 00:06:57.709 13:19:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.709 13:19:53 version -- app/version.sh@14 -- # cut -f2 00:06:57.709 13:19:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.709 13:19:53 version -- app/version.sh@19 -- # patch=0 00:06:57.709 13:19:53 version -- app/version.sh@20 -- # get_header_version suffix 00:06:57.709 13:19:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.709 13:19:53 version -- app/version.sh@14 -- # cut -f2 00:06:57.709 13:19:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.709 13:19:53 version -- app/version.sh@20 -- # suffix=-pre 00:06:57.709 13:19:53 version -- app/version.sh@22 -- # version=25.1 00:06:57.709 13:19:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:57.709 13:19:53 version -- app/version.sh@28 -- # version=25.1rc0 00:06:57.709 13:19:53 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:57.709 13:19:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:57.709 13:19:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:57.709 13:19:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:57.709 00:06:57.709 real 0m0.203s 00:06:57.709 user 0m0.131s 00:06:57.709 sys 0m0.095s 00:06:57.709 ************************************ 00:06:57.709 END TEST version 00:06:57.709 ************************************ 00:06:57.709 13:19:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.709 13:19:53 version -- common/autotest_common.sh@10 -- # set +x 00:06:57.709 13:19:53 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:57.709 13:19:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:57.709 13:19:53 -- spdk/autotest.sh@194 -- # uname -s 00:06:57.709 13:19:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:57.709 13:19:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:57.709 13:19:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:57.709 13:19:53 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:57.709 13:19:53 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:57.709 13:19:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.709 13:19:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.709 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:06:57.709 ************************************ 00:06:57.709 START TEST blockdev_nvme 00:06:57.709 ************************************ 00:06:57.709 13:19:53 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:57.971 * Looking for test storage... 00:06:57.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.972 13:19:53 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.972 --rc genhtml_branch_coverage=1 00:06:57.972 --rc genhtml_function_coverage=1 00:06:57.972 --rc genhtml_legend=1 00:06:57.972 --rc geninfo_all_blocks=1 00:06:57.972 --rc geninfo_unexecuted_blocks=1 00:06:57.972 00:06:57.972 ' 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.972 --rc genhtml_branch_coverage=1 00:06:57.972 --rc genhtml_function_coverage=1 00:06:57.972 --rc genhtml_legend=1 00:06:57.972 --rc geninfo_all_blocks=1 00:06:57.972 --rc geninfo_unexecuted_blocks=1 00:06:57.972 00:06:57.972 ' 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.972 --rc genhtml_branch_coverage=1 00:06:57.972 --rc genhtml_function_coverage=1 00:06:57.972 --rc genhtml_legend=1 00:06:57.972 --rc geninfo_all_blocks=1 00:06:57.972 --rc geninfo_unexecuted_blocks=1 00:06:57.972 00:06:57.972 ' 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.972 --rc genhtml_branch_coverage=1 00:06:57.972 --rc genhtml_function_coverage=1 00:06:57.972 --rc genhtml_legend=1 00:06:57.972 --rc geninfo_all_blocks=1 00:06:57.972 --rc geninfo_unexecuted_blocks=1 00:06:57.972 00:06:57.972 ' 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:57.972 13:19:53 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71879 00:06:57.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71879 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71879 ']' 00:06:57.972 13:19:53 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.972 13:19:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.972 [2024-11-18 13:19:54.014587] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
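waitforlisten above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock, retrying up to max_retries=100 times. A rough stand-in for that wait, assuming only that scripts/rpc.py and the spdk_get_version RPC are available (this is not the autotest_common.sh implementation):

# Start the target in the background, then poll the RPC socket until it responds.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!

for ((i = 0; i < 100; i++)); do          # mirrors max_retries=100 in the helper
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            spdk_get_version >/dev/null 2>&1; then
        echo "spdk_tgt (pid $spdk_tgt_pid) is listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.5
done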
00:06:57.972 [2024-11-18 13:19:54.015150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71879 ] 00:06:58.233 [2024-11-18 13:19:54.174280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.233 [2024-11-18 13:19:54.194247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.805 13:19:54 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.805 13:19:54 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:58.805 13:19:54 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:58.805 13:19:54 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:58.805 13:19:54 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:58.805 13:19:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:58.805 13:19:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:58.805 13:19:54 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:58.805 13:19:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.805 13:19:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.066 13:19:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.066 13:19:55 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:59.066 13:19:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.066 13:19:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.066 13:19:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.066 13:19:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:59.066 13:19:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:59.066 13:19:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.067 13:19:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.339 13:19:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.339 13:19:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.339 13:19:55 blockdev_nvme -- 
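gen_nvme.sh emitted the JSON above, which load_subsystem_config feeds to the bdev layer to attach the four emulated PCIe controllers in one shot. The same attachments can be made one at a time with scripts/rpc.py; a sketch, where the -b/-t/-a short options are assumed to map to the name/trtype/traddr parameters shown in the JSON and the PCI addresses are the QEMU ones from this run, not fixed values:

# One bdev_nvme controller per PCIe function; each namespace then shows up as NvmeXnY.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
$rpc bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
$rpc bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0   # carries namespaces Nvme2n1..Nvme2n3
$rpc bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0

# Confirm what was attached.
$rpc bdev_get_bdevs | jq -r '.[].name'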
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:59.339 13:19:55 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:59.339 13:19:55 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.339 13:19:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.339 13:19:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:59.339 13:19:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:59.340 13:19:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0ffe5a70-cbbe-498d-a0b2-712e0f2b32fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0ffe5a70-cbbe-498d-a0b2-712e0f2b32fa",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "acbc4bf1-5cb2-4aae-9e83-e9d1743934a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "acbc4bf1-5cb2-4aae-9e83-e9d1743934a2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b97d9627-41df-4e92-9627-97ebea6f00b8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b97d9627-41df-4e92-9627-97ebea6f00b8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "18c9270c-ba17-4a7b-9f91-31d3df5644e7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "18c9270c-ba17-4a7b-9f91-31d3df5644e7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d7394422-b023-44c6-8338-7395ef9b3578"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "d7394422-b023-44c6-8338-7395ef9b3578",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d16e108f-75c9-4f7c-8bcf-bc9ffeeb2624"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d16e108f-75c9-4f7c-8bcf-bc9ffeeb2624",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:59.340 13:19:55 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:59.340 13:19:55 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:59.340 13:19:55 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:59.340 13:19:55 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71879 00:06:59.340 13:19:55 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71879 ']' 00:06:59.340 13:19:55 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71879 00:06:59.340 13:19:55 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:59.340 13:19:55 
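The long printf above is the bdev_get_bdevs dump that blockdev.sh@747-749 captures into its arrays: only unclaimed bdevs are kept, their names become bdev_list, and the first one is picked as hello_world_bdev. A condensed sketch of that collection step, calling scripts/rpc.py directly rather than through the harness's rpc_cmd wrapper:

# Keep only bdevs nobody has claimed, then pull out their names.
mapfile -t bdevs_name < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'
)

bdev_list=("${bdevs_name[@]}")
hello_world_bdev=${bdevs_name[0]}    # Nvme0n1 in this run
echo "testing ${#bdev_list[@]} bdevs: ${bdev_list[*]}"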
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.340 13:19:55 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71879 00:06:59.340 killing process with pid 71879 00:06:59.340 13:19:55 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.340 13:19:55 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.340 13:19:55 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71879' 00:06:59.340 13:19:55 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71879 00:06:59.340 13:19:55 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71879 00:06:59.602 13:19:55 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:59.602 13:19:55 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.602 13:19:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:59.602 13:19:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.602 13:19:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.602 ************************************ 00:06:59.602 START TEST bdev_hello_world 00:06:59.602 ************************************ 00:06:59.602 13:19:55 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.602 [2024-11-18 13:19:55.668236] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:06:59.602 [2024-11-18 13:19:55.668498] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71941 ] 00:06:59.862 [2024-11-18 13:19:55.826598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.863 [2024-11-18 13:19:55.845888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.124 [2024-11-18 13:19:56.217364] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:00.124 [2024-11-18 13:19:56.217413] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:00.124 [2024-11-18 13:19:56.217448] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:00.124 [2024-11-18 13:19:56.219548] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:00.124 [2024-11-18 13:19:56.220402] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:00.124 [2024-11-18 13:19:56.220431] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:00.124 [2024-11-18 13:19:56.221065] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:00.124 00:07:00.124 [2024-11-18 13:19:56.221089] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:00.385 00:07:00.385 real 0m0.773s 00:07:00.385 user 0m0.503s 00:07:00.385 sys 0m0.167s 00:07:00.385 ************************************ 00:07:00.385 END TEST bdev_hello_world 00:07:00.385 ************************************ 00:07:00.385 13:19:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.385 13:19:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:00.385 13:19:56 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:00.385 13:19:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.385 13:19:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.385 13:19:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.385 ************************************ 00:07:00.385 START TEST bdev_bounds 00:07:00.385 ************************************ 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:00.385 Process bdevio pid: 71972 00:07:00.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71972 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71972' 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71972 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71972 ']' 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.385 13:19:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:00.385 [2024-11-18 13:19:56.503638] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
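The bdev_hello_world test that just finished ran the hello_bdev example against Nvme0n1, writing a string through the bdev layer and reading it back. Outside the harness it can be run directly; a sketch, assuming the examples were built under build/examples as in this run and reusing the generated bdev.json:

# bdev.json is the config produced by scripts/gen_nvme.sh; -b selects the target bdev.
cd /home/vagrant/spdk_repo/spdk
./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
# Expected tail of the output: "Read string from bdev : Hello World!" followed by "Stopping app".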
00:07:00.385 [2024-11-18 13:19:56.503756] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71972 ] 00:07:00.646 [2024-11-18 13:19:56.662987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.646 [2024-11-18 13:19:56.684804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.646 [2024-11-18 13:19:56.685406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.646 [2024-11-18 13:19:56.685478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.589 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.589 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:01.589 13:19:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:01.589 I/O targets: 00:07:01.589 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:01.589 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:01.589 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.589 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.589 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.589 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:01.589 00:07:01.589 00:07:01.589 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.589 http://cunit.sourceforge.net/ 00:07:01.589 00:07:01.589 00:07:01.589 Suite: bdevio tests on: Nvme3n1 00:07:01.589 Test: blockdev write read block ...passed 00:07:01.589 Test: blockdev write zeroes read block ...passed 00:07:01.589 Test: blockdev write zeroes read no split ...passed 00:07:01.589 Test: blockdev write zeroes read split ...passed 00:07:01.589 Test: blockdev write zeroes read split partial ...passed 00:07:01.589 Test: blockdev reset ...[2024-11-18 13:19:57.463069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:01.589 [2024-11-18 13:19:57.466920] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:01.589 passed 00:07:01.589 Test: blockdev write read 8 blocks ...passed 00:07:01.589 Test: blockdev write read size > 128k ...passed 00:07:01.589 Test: blockdev write read invalid size ...passed 00:07:01.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.589 Test: blockdev write read max offset ...passed 00:07:01.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.589 Test: blockdev writev readv 8 blocks ...passed 00:07:01.589 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.589 Test: blockdev writev readv block ...passed 00:07:01.589 Test: blockdev writev readv size > 128k ...passed 00:07:01.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.589 Test: blockdev comparev and writev ...[2024-11-18 13:19:57.482376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:01.589 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ca006000 len:0x1000 00:07:01.589 [2024-11-18 13:19:57.482508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.589 passed 00:07:01.589 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.589 Test: blockdev nvme admin passthru ...[2024-11-18 13:19:57.484734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.589 [2024-11-18 13:19:57.484769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.589 passed 00:07:01.589 Test: blockdev copy ...passed 00:07:01.589 Suite: bdevio tests on: Nvme2n3 00:07:01.589 Test: blockdev write read block ...passed 00:07:01.589 Test: blockdev write zeroes read block ...passed 00:07:01.589 Test: blockdev write zeroes read no split ...passed 00:07:01.589 Test: blockdev write zeroes read split ...passed 00:07:01.589 Test: blockdev write zeroes read split partial ...passed 00:07:01.589 Test: blockdev reset ...[2024-11-18 13:19:57.513339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:01.589 [2024-11-18 13:19:57.516292] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:01.589 passed 00:07:01.589 Test: blockdev write read 8 blocks ...passed 00:07:01.589 Test: blockdev write read size > 128k ...passed 00:07:01.589 Test: blockdev write read invalid size ...passed 00:07:01.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.589 Test: blockdev write read max offset ...passed 00:07:01.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.589 Test: blockdev writev readv 8 blocks ...passed 00:07:01.589 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.589 Test: blockdev writev readv block ...passed 00:07:01.589 Test: blockdev writev readv size > 128k ...passed 00:07:01.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.589 Test: blockdev comparev and writev ...[2024-11-18 13:19:57.535189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6a05000 len:0x1000 00:07:01.589 [2024-11-18 13:19:57.535225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.589 passed 00:07:01.589 Test: blockdev nvme passthru rw ...passed 00:07:01.589 Test: blockdev nvme passthru vendor specific ...[2024-11-18 13:19:57.537012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.589 passed 00:07:01.589 Test: blockdev nvme admin passthru ...[2024-11-18 13:19:57.537124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.589 passed 00:07:01.589 Test: blockdev copy ...passed 00:07:01.589 Suite: bdevio tests on: Nvme2n2 00:07:01.589 Test: blockdev write read block ...passed 00:07:01.589 Test: blockdev write zeroes read block ...passed 00:07:01.589 Test: blockdev write zeroes read no split ...passed 00:07:01.589 Test: blockdev write zeroes read split ...passed 00:07:01.589 Test: blockdev write zeroes read split partial ...passed 00:07:01.589 Test: blockdev reset ...[2024-11-18 13:19:57.565397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:01.589 [2024-11-18 13:19:57.568704] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:01.589 passed 00:07:01.589 Test: blockdev write read 8 blocks ...passed 00:07:01.589 Test: blockdev write read size > 128k ...passed 00:07:01.589 Test: blockdev write read invalid size ...passed 00:07:01.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.589 Test: blockdev write read max offset ...passed 00:07:01.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.589 Test: blockdev writev readv 8 blocks ...passed 00:07:01.589 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.589 Test: blockdev writev readv block ...passed 00:07:01.589 Test: blockdev writev readv size > 128k ...passed 00:07:01.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.589 Test: blockdev comparev and writev ...[2024-11-18 13:19:57.586710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0436000 len:0x1000 00:07:01.589 passed 00:07:01.589 Test: blockdev nvme passthru rw ...[2024-11-18 13:19:57.586929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.589 passed 00:07:01.589 Test: blockdev nvme passthru vendor specific ...[2024-11-18 13:19:57.589589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.589 passed 00:07:01.589 Test: blockdev nvme admin passthru ...[2024-11-18 13:19:57.589780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.589 passed 00:07:01.589 Test: blockdev copy ...passed 00:07:01.589 Suite: bdevio tests on: Nvme2n1 00:07:01.589 Test: blockdev write read block ...passed 00:07:01.589 Test: blockdev write zeroes read block ...passed 00:07:01.589 Test: blockdev write zeroes read no split ...passed 00:07:01.589 Test: blockdev write zeroes read split ...passed 00:07:01.589 Test: blockdev write zeroes read split partial ...passed 00:07:01.589 Test: blockdev reset ...[2024-11-18 13:19:57.614451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:01.589 [2024-11-18 13:19:57.620679] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:01.589 passed 00:07:01.589 Test: blockdev write read 8 blocks ...passed 00:07:01.589 Test: blockdev write read size > 128k ...passed 00:07:01.589 Test: blockdev write read invalid size ...passed 00:07:01.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.589 Test: blockdev write read max offset ...passed 00:07:01.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.589 Test: blockdev writev readv 8 blocks ...passed 00:07:01.589 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.589 Test: blockdev writev readv block ...passed 00:07:01.589 Test: blockdev writev readv size > 128k ...passed 00:07:01.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.589 Test: blockdev comparev and writev ...[2024-11-18 13:19:57.636242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0430000 len:0x1000 00:07:01.589 [2024-11-18 13:19:57.636279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.589 passed 00:07:01.589 Test: blockdev nvme passthru rw ...passed 00:07:01.589 Test: blockdev nvme passthru vendor specific ...[2024-11-18 13:19:57.639099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:07:01.589 Test: blockdev nvme admin passthru ...RP2 0x0 00:07:01.589 [2024-11-18 13:19:57.639205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.589 passed 00:07:01.589 Test: blockdev copy ...passed 00:07:01.589 Suite: bdevio tests on: Nvme1n1 00:07:01.589 Test: blockdev write read block ...passed 00:07:01.589 Test: blockdev write zeroes read block ...passed 00:07:01.589 Test: blockdev write zeroes read no split ...passed 00:07:01.589 Test: blockdev write zeroes read split ...passed 00:07:01.589 Test: blockdev write zeroes read split partial ...passed 00:07:01.590 Test: blockdev reset ...[2024-11-18 13:19:57.665818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:01.590 [2024-11-18 13:19:57.668502] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:07:01.590 Test: blockdev write read 8 blocks ...uccessful. 
00:07:01.590 passed 00:07:01.590 Test: blockdev write read size > 128k ...passed 00:07:01.590 Test: blockdev write read invalid size ...passed 00:07:01.590 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.590 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.590 Test: blockdev write read max offset ...passed 00:07:01.590 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.590 Test: blockdev writev readv 8 blocks ...passed 00:07:01.590 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.590 Test: blockdev writev readv block ...passed 00:07:01.590 Test: blockdev writev readv size > 128k ...passed 00:07:01.590 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.590 Test: blockdev comparev and writev ...[2024-11-18 13:19:57.685598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e042c000 len:0x1000 00:07:01.590 [2024-11-18 13:19:57.685637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.590 passed 00:07:01.590 Test: blockdev nvme passthru rw ...passed 00:07:01.590 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.590 Test: blockdev nvme admin passthru ...[2024-11-18 13:19:57.687965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.590 [2024-11-18 13:19:57.687996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.590 passed 00:07:01.590 Test: blockdev copy ...passed 00:07:01.590 Suite: bdevio tests on: Nvme0n1 00:07:01.590 Test: blockdev write read block ...passed 00:07:01.590 Test: blockdev write zeroes read block ...passed 00:07:01.590 Test: blockdev write zeroes read no split ...passed 00:07:01.590 Test: blockdev write zeroes read split ...passed 00:07:01.850 Test: blockdev write zeroes read split partial ...passed 00:07:01.850 Test: blockdev reset ...[2024-11-18 13:19:57.718099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:01.850 passed 00:07:01.850 Test: blockdev write read 8 blocks ...[2024-11-18 13:19:57.719903] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:01.850 passed 00:07:01.850 Test: blockdev write read size > 128k ...passed 00:07:01.850 Test: blockdev write read invalid size ...passed 00:07:01.850 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.850 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.850 Test: blockdev write read max offset ...passed 00:07:01.850 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.850 Test: blockdev writev readv 8 blocks ...passed 00:07:01.850 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.850 Test: blockdev writev readv block ...passed 00:07:01.850 Test: blockdev writev readv size > 128k ...passed 00:07:01.850 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.850 Test: blockdev comparev and writev ...passed 00:07:01.850 Test: blockdev nvme passthru rw ...[2024-11-18 13:19:57.733270] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:01.850 separate metadata which is not supported yet. 
00:07:01.850 passed 00:07:01.850 Test: blockdev nvme passthru vendor specific ...[2024-11-18 13:19:57.734622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:07:01.850 Test: blockdev nvme admin passthru ...RP2 0x0 00:07:01.850 [2024-11-18 13:19:57.734716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:01.850 passed 00:07:01.850 Test: blockdev copy ...passed 00:07:01.850 00:07:01.850 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.850 suites 6 6 n/a 0 0 00:07:01.850 tests 138 138 138 0 0 00:07:01.850 asserts 893 893 893 0 n/a 00:07:01.850 00:07:01.850 Elapsed time = 0.650 seconds 00:07:01.850 0 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71972 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71972 ']' 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71972 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71972 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71972' 00:07:01.850 killing process with pid 71972 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71972 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71972 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:01.850 00:07:01.850 real 0m1.477s 00:07:01.850 user 0m3.698s 00:07:01.850 sys 0m0.269s 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.850 ************************************ 00:07:01.850 END TEST bdev_bounds 00:07:01.850 ************************************ 00:07:01.850 13:19:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:01.850 13:19:57 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.850 13:19:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:01.850 13:19:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.850 13:19:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.112 ************************************ 00:07:02.112 START TEST bdev_nbd 00:07:02.112 ************************************ 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
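The bdev_bounds run above drives the bdevio application (started with -w and -s 0 as shown in the trace) and then triggers every registered CUnit suite over RPC with tests.py perform_tests. A sketch of that two-step flow with the paths from this run; the sleep is a crude stand-in for the harness's waitforlisten:

# Start bdevio in wait mode against the same bdev.json, give it a moment to come up,
# then kick the suites off over RPC and reap the process afterwards.
cd /home/vagrant/spdk_repo/spdk
./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
bdevio_pid=$!
sleep 2
./test/bdev/bdevio/tests.py perform_tests   # summary above: 6 suites, 138 tests, 0 failed
kill "$bdevio_pid"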
rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72025 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72025 /var/tmp/spdk-nbd.sock 00:07:02.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72025 ']' 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.112 13:19:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:02.112 [2024-11-18 13:19:58.050883] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:07:02.112 [2024-11-18 13:19:58.051107] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.112 [2024-11-18 13:19:58.210213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.112 [2024-11-18 13:19:58.229610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.052 13:19:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.052 1+0 records in 
00:07:03.052 1+0 records out 00:07:03.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00172269 s, 2.4 MB/s 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.052 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.318 1+0 records in 00:07:03.318 1+0 records out 00:07:03.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106278 s, 3.9 MB/s 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.318 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
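Each nbd_start_disk call above exports one bdev as a /dev/nbdX block device, waitfornbd polls /proc/partitions until the kernel lists it, and a single 4 KiB O_DIRECT read proves the export works. Reduced to its essentials for one device, with the socket path and bdev name from this run (/tmp/nbdtest is a stand-in scratch file):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# Export the bdev; the RPC replies with the /dev/nbdX node it allocated.
nbd_dev=$($rpc nbd_start_disk Nvme0n1)

# Wait until the kernel registers the device, exactly like waitfornbd does.
until grep -q -w "$(basename "$nbd_dev")" /proc/partitions; do sleep 0.1; done

# One 4 KiB direct-I/O read proves the export is serving data.
dd if="$nbd_dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
rm -f /tmp/nbdtest

# Tear the export down again.
$rpc nbd_stop_disk "$nbd_dev"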
waitfornbd nbd2 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.588 1+0 records in 00:07:03.588 1+0 records out 00:07:03.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121503 s, 3.4 MB/s 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.588 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.849 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.850 1+0 records in 00:07:03.850 1+0 records out 00:07:03.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840461 s, 4.9 MB/s 00:07:03.850 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.850 13:19:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.850 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.850 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.850 13:19:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.850 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.850 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.850 13:19:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.111 1+0 records in 00:07:04.111 1+0 records out 00:07:04.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111209 s, 3.7 MB/s 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.111 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.371 1+0 records in 00:07:04.371 1+0 records out 00:07:04.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000963298 s, 4.3 MB/s 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.371 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd0", 00:07:04.633 "bdev_name": "Nvme0n1" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd1", 00:07:04.633 "bdev_name": "Nvme1n1" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd2", 00:07:04.633 "bdev_name": "Nvme2n1" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd3", 00:07:04.633 "bdev_name": "Nvme2n2" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd4", 00:07:04.633 "bdev_name": "Nvme2n3" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd5", 00:07:04.633 "bdev_name": "Nvme3n1" 00:07:04.633 } 00:07:04.633 ]' 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd0", 00:07:04.633 "bdev_name": "Nvme0n1" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd1", 00:07:04.633 "bdev_name": "Nvme1n1" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd2", 00:07:04.633 "bdev_name": "Nvme2n1" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd3", 00:07:04.633 "bdev_name": "Nvme2n2" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd4", 00:07:04.633 "bdev_name": "Nvme2n3" 00:07:04.633 }, 00:07:04.633 { 00:07:04.633 "nbd_device": "/dev/nbd5", 00:07:04.633 "bdev_name": "Nvme3n1" 00:07:04.633 } 00:07:04.633 ]' 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.633 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.893 13:20:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.893 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.154 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.416 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.675 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.936 13:20:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.197 13:20:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.197 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:06.457 /dev/nbd0 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.457 
13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.457 1+0 records in 00:07:06.457 1+0 records out 00:07:06.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00158243 s, 2.6 MB/s 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.457 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:06.719 /dev/nbd1 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.719 1+0 records in 00:07:06.719 1+0 records out 00:07:06.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130953 s, 3.1 MB/s 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.719 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:06.980 /dev/nbd10 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.980 1+0 records in 00:07:06.980 1+0 records out 00:07:06.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129934 s, 3.2 MB/s 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.980 13:20:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:07.241 /dev/nbd11 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.241 1+0 records in 00:07:07.241 1+0 records out 00:07:07.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958028 s, 4.3 MB/s 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.241 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:07.241 /dev/nbd12 00:07:07.544 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:07.544 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:07.544 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:07.544 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.544 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.544 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.545 1+0 records in 00:07:07.545 1+0 records out 00:07:07.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00171891 s, 2.4 MB/s 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:07.545 /dev/nbd13 00:07:07.545 13:20:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.545 1+0 records in 00:07:07.545 1+0 records out 00:07:07.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00215822 s, 1.9 MB/s 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.545 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.828 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd0", 00:07:07.828 "bdev_name": "Nvme0n1" 00:07:07.828 }, 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd1", 00:07:07.828 "bdev_name": "Nvme1n1" 00:07:07.828 }, 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd10", 00:07:07.828 "bdev_name": "Nvme2n1" 00:07:07.828 }, 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd11", 00:07:07.828 "bdev_name": "Nvme2n2" 00:07:07.828 }, 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd12", 00:07:07.828 "bdev_name": "Nvme2n3" 00:07:07.828 }, 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd13", 00:07:07.828 "bdev_name": "Nvme3n1" 00:07:07.828 } 00:07:07.828 ]' 00:07:07.828 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.828 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd0", 00:07:07.828 "bdev_name": "Nvme0n1" 00:07:07.828 }, 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd1", 00:07:07.828 "bdev_name": "Nvme1n1" 00:07:07.828 }, 00:07:07.828 { 
00:07:07.828 "nbd_device": "/dev/nbd10", 00:07:07.828 "bdev_name": "Nvme2n1" 00:07:07.828 }, 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd11", 00:07:07.828 "bdev_name": "Nvme2n2" 00:07:07.828 }, 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd12", 00:07:07.828 "bdev_name": "Nvme2n3" 00:07:07.828 }, 00:07:07.828 { 00:07:07.828 "nbd_device": "/dev/nbd13", 00:07:07.828 "bdev_name": "Nvme3n1" 00:07:07.828 } 00:07:07.828 ]' 00:07:07.828 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.828 /dev/nbd1 00:07:07.828 /dev/nbd10 00:07:07.828 /dev/nbd11 00:07:07.828 /dev/nbd12 00:07:07.828 /dev/nbd13' 00:07:07.828 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.828 /dev/nbd1 00:07:07.828 /dev/nbd10 00:07:07.828 /dev/nbd11 00:07:07.828 /dev/nbd12 00:07:07.828 /dev/nbd13' 00:07:07.828 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.828 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:07.829 256+0 records in 00:07:07.829 256+0 records out 00:07:07.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600184 s, 175 MB/s 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.829 13:20:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.090 256+0 records in 00:07:08.090 256+0 records out 00:07:08.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.264639 s, 4.0 MB/s 00:07:08.090 13:20:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.090 13:20:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.351 256+0 records in 00:07:08.351 256+0 records out 00:07:08.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.215652 s, 4.9 MB/s 00:07:08.351 13:20:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.351 13:20:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:08.611 256+0 records in 00:07:08.611 256+0 records out 00:07:08.611 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.242945 s, 4.3 MB/s 00:07:08.611 13:20:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.611 13:20:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:08.871 256+0 records in 00:07:08.871 256+0 records out 00:07:08.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.262238 s, 4.0 MB/s 00:07:08.871 13:20:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.871 13:20:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:09.131 256+0 records in 00:07:09.131 256+0 records out 00:07:09.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244591 s, 4.3 MB/s 00:07:09.131 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.131 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:09.392 256+0 records in 00:07:09.392 256+0 records out 00:07:09.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.26669 s, 3.9 MB/s 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- 
# cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.392 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.652 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.912 13:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.172 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:10.432 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.692 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.951 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.951 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.951 13:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.951 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:10.952 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:11.212 malloc_lvol_verify 00:07:11.212 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:11.472 9572711e-f811-42cb-b56d-7b859a36bdee 00:07:11.472 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:11.753 3af14045-05f4-4238-9461-f0b0c3f55b6a 00:07:11.753 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:11.753 /dev/nbd0 00:07:11.753 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:11.753 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:11.753 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:11.753 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:11.753 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:11.753 mke2fs 1.47.0 (5-Feb-2023) 00:07:11.753 Discarding device blocks: 0/4096 done 00:07:11.753 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:11.753 00:07:11.753 Allocating group tables: 0/1 done 00:07:12.013 Writing inode tables: 0/1 done 00:07:12.013 Creating journal (1024 blocks): done 00:07:12.013 Writing superblocks and filesystem accounting information: 0/1 done 00:07:12.013 00:07:12.013 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:12.013 13:20:07 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.013 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:12.013 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.013 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:12.013 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.013 13:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72025 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72025 ']' 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72025 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72025 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.013 killing process with pid 72025 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72025' 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72025 00:07:12.013 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72025 00:07:12.274 13:20:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:12.274 00:07:12.274 real 0m10.319s 00:07:12.274 user 0m14.290s 00:07:12.274 sys 0m3.459s 00:07:12.274 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.274 13:20:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:12.274 ************************************ 00:07:12.274 END TEST bdev_nbd 00:07:12.274 ************************************ 00:07:12.274 skipping fio tests on NVMe due to multi-ns failures. 00:07:12.274 13:20:08 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:12.274 13:20:08 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:12.274 13:20:08 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
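The nbd stage that ends here spends most of its trace in one small helper: after every nbd_start_disk RPC the harness polls /proc/partitions for the new node and then proves it is readable with a single 4 KiB direct-I/O read (the repeated "1+0 records in / 1+0 records out" pairs above). The sketch below is a loose reconstruction of that waitfornbd pattern from the trace, not the upstream autotest_common.sh function; the scratch-file path is shortened and the retry delay is an assumption, since the log only ever shows the first, successful iteration.

    # Loose reconstruction of the waitfornbd pattern seen in the trace above.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # The node shows up in /proc/partitions once the kernel has attached it.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the trace does not show the delay between retries
        done
        # Prove the device is actually readable with one 4 KiB direct-I/O block.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

Every node started in this stage (nbd0 through nbd5 in the first pass, nbd0/nbd1/nbd10-nbd13 in the second) goes through this sequence before any data is written to it.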
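The data check in the second half of the stage (nbd_dd_data_verify in nbd_common.sh) is just as plain: one 1 MiB random file is pushed through every mapped /dev/nbd* node with direct I/O and then compared back against the source with cmp. Roughly, with the same caveats as above:

    # Rough shape of the write/verify pass from the trace (not the upstream helper).
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp_file=/tmp/nbdrandtest

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write it to each device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                             # read back and compare
    done
    rm "$tmp_file"

A mismatch makes cmp exit non-zero, which is enough to fail the stage.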
00:07:12.274 13:20:08 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:12.274 13:20:08 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:12.274 13:20:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:12.274 13:20:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.274 13:20:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.274 ************************************ 00:07:12.274 START TEST bdev_verify 00:07:12.274 ************************************ 00:07:12.274 13:20:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:12.534 [2024-11-18 13:20:08.424016] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:07:12.534 [2024-11-18 13:20:08.424142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72399 ] 00:07:12.534 [2024-11-18 13:20:08.579653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.534 [2024-11-18 13:20:08.601005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.534 [2024-11-18 13:20:08.601043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.104 Running I/O for 5 seconds... 00:07:15.427 17984.00 IOPS, 70.25 MiB/s [2024-11-18T13:20:12.498Z] 18432.00 IOPS, 72.00 MiB/s [2024-11-18T13:20:13.437Z] 18837.33 IOPS, 73.58 MiB/s [2024-11-18T13:20:14.377Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-18T13:20:14.377Z] 19379.20 IOPS, 75.70 MiB/s 00:07:18.249 Latency(us) 00:07:18.249 [2024-11-18T13:20:14.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.249 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x0 length 0xbd0bd 00:07:18.249 Nvme0n1 : 5.05 1623.26 6.34 0.00 0.00 78499.28 15627.82 84692.68 00:07:18.249 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:18.249 Nvme0n1 : 5.08 1561.60 6.10 0.00 0.00 81776.54 16232.76 137928.07 00:07:18.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x0 length 0xa0000 00:07:18.249 Nvme1n1 : 5.08 1626.28 6.35 0.00 0.00 78204.11 8620.50 74206.92 00:07:18.249 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0xa0000 length 0xa0000 00:07:18.249 Nvme1n1 : 5.08 1560.93 6.10 0.00 0.00 81723.60 18249.26 136314.88 00:07:18.249 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x0 length 0x80000 00:07:18.249 Nvme2n1 : 5.08 1625.81 6.35 0.00 0.00 77990.91 8217.21 69770.63 00:07:18.249 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x80000 length 0x80000 00:07:18.249 Nvme2n1 : 5.09 1560.45 6.10 0.00 0.00 81399.56 18148.43 136314.88 00:07:18.249 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x0 length 0x80000 00:07:18.249 Nvme2n2 : 5.09 1634.70 6.39 0.00 0.00 77596.22 8771.74 68560.74 00:07:18.249 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x80000 length 0x80000 00:07:18.249 Nvme2n2 : 5.09 1559.88 6.09 0.00 0.00 81216.87 17140.18 133088.49 00:07:18.249 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x0 length 0x80000 00:07:18.249 Nvme2n3 : 5.09 1633.45 6.38 0.00 0.00 77488.33 11494.01 69367.34 00:07:18.249 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x80000 length 0x80000 00:07:18.249 Nvme2n3 : 5.09 1559.37 6.09 0.00 0.00 81078.40 17039.36 135508.28 00:07:18.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x0 length 0x20000 00:07:18.249 Nvme3n1 : 5.10 1632.32 6.38 0.00 0.00 77381.37 13409.67 76223.41 00:07:18.249 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.249 Verification LBA range: start 0x20000 length 0x20000 00:07:18.249 Nvme3n1 : 5.09 1558.15 6.09 0.00 0.00 80975.53 15930.29 138734.67 00:07:18.249 [2024-11-18T13:20:14.377Z] =================================================================================================================== 00:07:18.249 [2024-11-18T13:20:14.377Z] Total : 19136.19 74.75 0.00 0.00 79572.88 8217.21 138734.67 00:07:18.819 00:07:18.819 real 0m6.364s 00:07:18.819 user 0m12.015s 00:07:18.819 sys 0m0.208s 00:07:18.819 13:20:14 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.820 ************************************ 00:07:18.820 END TEST bdev_verify 00:07:18.820 13:20:14 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:18.820 ************************************ 00:07:18.820 13:20:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:18.820 13:20:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:18.820 13:20:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.820 13:20:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.820 ************************************ 00:07:18.820 START TEST bdev_verify_big_io 00:07:18.820 ************************************ 00:07:18.820 13:20:14 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:18.820 [2024-11-18 13:20:14.848297] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
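Both verification stages (bdev_verify above, bdev_verify_big_io starting here) drive the same bdevperf example binary against the same bdev.json description of the NVMe devices; only the I/O size changes. A hand-run equivalent, with the paths exactly as they appear in this log and assuming the devices are still bound the way the harness left them, would look roughly like:

    SPDK=/home/vagrant/spdk_repo/spdk

    # bdev_verify: 4 KiB verify workload, queue depth 128, 5 seconds
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

    # bdev_verify_big_io: identical except for the 64 KiB I/O size
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The -m 0x3 core mask pins the run to cores 0 and 1, which is why each launch reports two reactors starting.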
00:07:18.820 [2024-11-18 13:20:14.848417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72493 ] 00:07:19.080 [2024-11-18 13:20:14.999041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.080 [2024-11-18 13:20:15.019159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.080 [2024-11-18 13:20:15.019163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.340 Running I/O for 5 seconds... 00:07:24.579 699.00 IOPS, 43.69 MiB/s [2024-11-18T13:20:21.648Z] 1989.00 IOPS, 124.31 MiB/s [2024-11-18T13:20:21.648Z] 2600.33 IOPS, 162.52 MiB/s 00:07:25.520 Latency(us) 00:07:25.520 [2024-11-18T13:20:21.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.520 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x0 length 0xbd0b 00:07:25.520 Nvme0n1 : 5.70 112.24 7.01 0.00 0.00 1091879.70 17946.78 1109877.37 00:07:25.520 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:25.520 Nvme0n1 : 5.85 101.38 6.34 0.00 0.00 1183412.35 32868.82 1716438.25 00:07:25.520 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x0 length 0xa000 00:07:25.520 Nvme1n1 : 5.85 113.46 7.09 0.00 0.00 1042541.10 69367.34 955010.76 00:07:25.520 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0xa000 length 0xa000 00:07:25.520 Nvme1n1 : 5.94 110.67 6.92 0.00 0.00 1057770.84 52428.80 1167952.34 00:07:25.520 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x0 length 0x8000 00:07:25.520 Nvme2n1 : 5.90 119.27 7.45 0.00 0.00 976234.77 45774.38 974369.08 00:07:25.520 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x8000 length 0x8000 00:07:25.520 Nvme2n1 : 5.99 110.95 6.93 0.00 0.00 1034549.10 87112.47 1793871.56 00:07:25.520 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x0 length 0x8000 00:07:25.520 Nvme2n2 : 6.00 123.93 7.75 0.00 0.00 909421.77 36901.81 1013085.74 00:07:25.520 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x8000 length 0x8000 00:07:25.520 Nvme2n2 : 6.01 115.37 7.21 0.00 0.00 967907.00 52428.80 1832588.21 00:07:25.520 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x0 length 0x8000 00:07:25.520 Nvme2n3 : 6.01 123.70 7.73 0.00 0.00 877837.59 38111.70 1045349.61 00:07:25.520 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x8000 length 0x8000 00:07:25.520 Nvme2n3 : 6.01 118.61 7.41 0.00 0.00 909748.78 12905.55 1871304.86 00:07:25.520 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x0 length 0x2000 00:07:25.520 Nvme3n1 : 6.02 137.43 8.59 0.00 0.00 773772.03 1405.24 1226027.32 00:07:25.520 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:07:25.520 Verification LBA range: start 0x2000 length 0x2000 00:07:25.520 Nvme3n1 : 6.05 148.37 9.27 0.00 0.00 706432.09 1102.77 1393799.48 00:07:25.520 [2024-11-18T13:20:21.648Z] =================================================================================================================== 00:07:25.520 [2024-11-18T13:20:21.648Z] Total : 1435.39 89.71 0.00 0.00 946675.18 1102.77 1871304.86 00:07:26.462 00:07:26.462 real 0m7.430s 00:07:26.462 user 0m14.151s 00:07:26.462 sys 0m0.207s 00:07:26.462 13:20:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.462 ************************************ 00:07:26.462 END TEST bdev_verify_big_io 00:07:26.462 ************************************ 00:07:26.462 13:20:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:26.462 13:20:22 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.462 13:20:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:26.462 13:20:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.462 13:20:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.462 ************************************ 00:07:26.462 START TEST bdev_write_zeroes 00:07:26.462 ************************************ 00:07:26.462 13:20:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.462 [2024-11-18 13:20:22.343767] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:07:26.462 [2024-11-18 13:20:22.343884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72591 ] 00:07:26.462 [2024-11-18 13:20:22.502283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.462 [2024-11-18 13:20:22.521366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.036 Running I/O for 1 seconds... 
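Both the verify run that just finished and the write_zeroes run starting here drive the same bdevperf example binary; only the workload flags change between sub-tests. A minimal sketch of that invocation, with the paths and flag values taken from the trace lines above (flag meanings summarized for reference: -q queue depth per job, -o I/O size in bytes, -w workload type, -t run time in seconds):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1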
00:07:27.977 55296.00 IOPS, 216.00 MiB/s 00:07:27.977 Latency(us) 00:07:27.977 [2024-11-18T13:20:24.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.977 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.977 Nvme0n1 : 1.02 9186.27 35.88 0.00 0.00 13869.18 5696.59 26416.05 00:07:27.977 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.977 Nvme1n1 : 1.03 9173.24 35.83 0.00 0.00 13878.02 9830.40 25508.63 00:07:27.977 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.977 Nvme2n1 : 1.03 9159.76 35.78 0.00 0.00 13819.94 9427.10 22080.59 00:07:27.977 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.977 Nvme2n2 : 1.03 9143.84 35.72 0.00 0.00 13795.94 9729.58 22080.59 00:07:27.977 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.977 Nvme2n3 : 1.03 9182.85 35.87 0.00 0.00 13752.59 9225.45 22181.42 00:07:27.977 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.977 Nvme3n1 : 1.03 9170.32 35.82 0.00 0.00 13732.33 8620.50 22685.54 00:07:27.977 [2024-11-18T13:20:24.105Z] =================================================================================================================== 00:07:27.977 [2024-11-18T13:20:24.105Z] Total : 55016.29 214.91 0.00 0.00 13807.85 5696.59 26416.05 00:07:28.236 00:07:28.236 real 0m1.843s 00:07:28.236 user 0m1.578s 00:07:28.236 sys 0m0.151s 00:07:28.236 13:20:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.236 ************************************ 00:07:28.236 13:20:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:28.236 END TEST bdev_write_zeroes 00:07:28.236 ************************************ 00:07:28.236 13:20:24 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.236 13:20:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:28.236 13:20:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.236 13:20:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.236 ************************************ 00:07:28.236 START TEST bdev_json_nonenclosed 00:07:28.236 ************************************ 00:07:28.236 13:20:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.236 [2024-11-18 13:20:24.241541] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:07:28.236 [2024-11-18 13:20:24.241669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72633 ] 00:07:28.496 [2024-11-18 13:20:24.392079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.496 [2024-11-18 13:20:24.412469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.496 [2024-11-18 13:20:24.412558] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:28.496 [2024-11-18 13:20:24.412574] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:28.496 [2024-11-18 13:20:24.412585] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.496 00:07:28.496 real 0m0.299s 00:07:28.496 user 0m0.114s 00:07:28.496 sys 0m0.082s 00:07:28.496 ************************************ 00:07:28.496 END TEST bdev_json_nonenclosed 00:07:28.496 13:20:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.496 13:20:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:28.496 ************************************ 00:07:28.496 13:20:24 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.496 13:20:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:28.496 13:20:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.496 13:20:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.496 ************************************ 00:07:28.496 START TEST bdev_json_nonarray 00:07:28.496 ************************************ 00:07:28.496 13:20:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.496 [2024-11-18 13:20:24.597070] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:07:28.496 [2024-11-18 13:20:24.597209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72653 ] 00:07:28.756 [2024-11-18 13:20:24.748134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.756 [2024-11-18 13:20:24.768529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.756 [2024-11-18 13:20:24.768631] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
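The nonenclosed and nonarray runs above are negative tests: json_config_prepare_ctx rejects a config whose top level is not enclosed in {} and one whose "subsystems" key is not an array, exactly as the *ERROR* lines report, after which the app shuts down with a non-zero stop. For contrast, a config that passes both checks has roughly the shape below; the malloc bdev entry is illustrative only and not taken from this run:

  cat > /tmp/bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 1024, "block_size": 512 } }
        ]
      }
    ]
  }
  EOF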
00:07:28.756 [2024-11-18 13:20:24.768650] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:28.756 [2024-11-18 13:20:24.768661] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.756 00:07:28.756 real 0m0.301s 00:07:28.756 user 0m0.105s 00:07:28.756 sys 0m0.093s 00:07:28.756 13:20:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.756 ************************************ 00:07:28.756 END TEST bdev_json_nonarray 00:07:28.756 ************************************ 00:07:28.756 13:20:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:29.017 13:20:24 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:29.017 00:07:29.017 real 0m31.108s 00:07:29.017 user 0m48.402s 00:07:29.017 sys 0m5.325s 00:07:29.017 13:20:24 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.017 13:20:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.017 ************************************ 00:07:29.017 END TEST blockdev_nvme 00:07:29.017 ************************************ 00:07:29.017 13:20:24 -- spdk/autotest.sh@209 -- # uname -s 00:07:29.017 13:20:24 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:29.017 13:20:24 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:29.017 13:20:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.017 13:20:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.017 13:20:24 -- common/autotest_common.sh@10 -- # set +x 00:07:29.017 ************************************ 00:07:29.017 START TEST blockdev_nvme_gpt 00:07:29.017 ************************************ 00:07:29.017 13:20:24 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:29.017 * Looking for test storage... 
00:07:29.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:29.017 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.017 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.017 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.017 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.017 13:20:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:29.017 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.017 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.017 --rc genhtml_branch_coverage=1 00:07:29.017 --rc genhtml_function_coverage=1 00:07:29.017 --rc genhtml_legend=1 00:07:29.017 --rc geninfo_all_blocks=1 00:07:29.017 --rc geninfo_unexecuted_blocks=1 00:07:29.017 00:07:29.017 ' 00:07:29.017 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.017 --rc 
genhtml_branch_coverage=1 00:07:29.017 --rc genhtml_function_coverage=1 00:07:29.017 --rc genhtml_legend=1 00:07:29.017 --rc geninfo_all_blocks=1 00:07:29.017 --rc geninfo_unexecuted_blocks=1 00:07:29.017 00:07:29.017 ' 00:07:29.017 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.017 --rc genhtml_branch_coverage=1 00:07:29.017 --rc genhtml_function_coverage=1 00:07:29.017 --rc genhtml_legend=1 00:07:29.017 --rc geninfo_all_blocks=1 00:07:29.017 --rc geninfo_unexecuted_blocks=1 00:07:29.017 00:07:29.017 ' 00:07:29.017 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.018 --rc genhtml_branch_coverage=1 00:07:29.018 --rc genhtml_function_coverage=1 00:07:29.018 --rc genhtml_legend=1 00:07:29.018 --rc geninfo_all_blocks=1 00:07:29.018 --rc geninfo_unexecuted_blocks=1 00:07:29.018 00:07:29.018 ' 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72727 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72727 
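waitforlisten blocks until the spdk_tgt process just launched (pid 72727 here) answers RPCs on /var/tmp/spdk.sock; conceptually it is a poll loop against the RPC socket, bounded by the max_retries=100 seen in the trace that follows. A rough sketch of that idea only, not the actual helper from autotest_common.sh:

  # poll the default RPC socket until the target responds (socket path from the log)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done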
00:07:29.018 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72727 ']' 00:07:29.018 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.018 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.018 13:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:29.018 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.018 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.018 13:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.278 [2024-11-18 13:20:25.196235] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:07:29.278 [2024-11-18 13:20:25.196359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72727 ] 00:07:29.278 [2024-11-18 13:20:25.348162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.279 [2024-11-18 13:20:25.369785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.217 13:20:26 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.217 13:20:26 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:30.217 13:20:26 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:30.217 13:20:26 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:30.217 13:20:26 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:30.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.477 Waiting for block devices as requested 00:07:30.477 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.477 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.737 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.737 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:36.021 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:36.021 13:20:31 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:36.021 13:20:31 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:36.021 BYT; 00:07:36.021 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:36.021 BYT; 00:07:36.021 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:36.021 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:36.021 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:36.022 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:36.022 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:36.022 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:36.022 13:20:31 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:36.022 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:36.022 13:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:36.955 The operation has completed successfully. 00:07:36.955 13:20:32 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:37.889 The operation has completed successfully. 00:07:37.889 13:20:33 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:38.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.713 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.713 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.713 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.713 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.713 13:20:34 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:38.713 13:20:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.713 13:20:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.713 [] 00:07:38.714 13:20:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.714 13:20:34 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:38.714 13:20:34 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:38.714 13:20:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:38.714 13:20:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:39.017 13:20:34 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:39.017 13:20:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.017 13:20:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.302 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.302 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:39.302 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:39.303 13:20:35 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5b11b32b-97ba-4d18-9466-e5e5c39baedd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5b11b32b-97ba-4d18-9466-e5e5c39baedd",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5d53b8d1-7bee-4fc5-b27c-3044b752ef7d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5d53b8d1-7bee-4fc5-b27c-3044b752ef7d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "95c826d4-5879-4736-b6e9-e6ae9fd0f8f0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "95c826d4-5879-4736-b6e9-e6ae9fd0f8f0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "28f655b6-e35e-4543-9e31-501ab7477c91"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "28f655b6-e35e-4543-9e31-501ab7477c91",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cf9ba443-2fce-4978-88df-ed36f96fe3f8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cf9ba443-2fce-4978-88df-ed36f96fe3f8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:39.303 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72727 00:07:39.303 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72727 ']' 00:07:39.304 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72727 00:07:39.304 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:39.304 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.304 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72727 00:07:39.304 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.304 killing process with pid 72727 00:07:39.304 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.304 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72727' 00:07:39.304 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72727 00:07:39.304 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72727 00:07:39.562 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:39.562 13:20:35 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:39.562 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:39.562 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.562 13:20:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.562 ************************************ 00:07:39.562 START TEST bdev_hello_world 00:07:39.562 ************************************ 00:07:39.562 13:20:35 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:39.562 
[2024-11-18 13:20:35.633850] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:07:39.562 [2024-11-18 13:20:35.633958] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73345 ] 00:07:39.820 [2024-11-18 13:20:35.788589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.820 [2024-11-18 13:20:35.805628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.079 [2024-11-18 13:20:36.162405] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:40.079 [2024-11-18 13:20:36.162453] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:40.079 [2024-11-18 13:20:36.162471] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:40.079 [2024-11-18 13:20:36.164535] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:40.079 [2024-11-18 13:20:36.165055] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:40.079 [2024-11-18 13:20:36.165099] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:40.079 [2024-11-18 13:20:36.165425] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:40.079 00:07:40.079 [2024-11-18 13:20:36.165464] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:40.337 00:07:40.337 real 0m0.723s 00:07:40.337 user 0m0.479s 00:07:40.337 sys 0m0.140s 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:40.337 ************************************ 00:07:40.337 END TEST bdev_hello_world 00:07:40.337 ************************************ 00:07:40.337 13:20:36 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:40.337 13:20:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.337 13:20:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.337 13:20:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.337 ************************************ 00:07:40.337 START TEST bdev_bounds 00:07:40.337 ************************************ 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73366 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:40.337 Process bdevio pid: 73366 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73366' 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73366 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73366 ']' 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.337 13:20:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:40.337 [2024-11-18 13:20:36.400795] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:07:40.337 [2024-11-18 13:20:36.400902] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73366 ] 00:07:40.596 [2024-11-18 13:20:36.558743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.596 [2024-11-18 13:20:36.579351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.596 [2024-11-18 13:20:36.579500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.596 [2024-11-18 13:20:36.579598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.162 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.162 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:41.162 13:20:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:41.421 I/O targets: 00:07:41.421 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:41.421 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:41.421 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:41.421 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:41.421 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:41.421 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:41.421 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:41.421 00:07:41.421 00:07:41.421 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.421 http://cunit.sourceforge.net/ 00:07:41.421 00:07:41.421 00:07:41.421 Suite: bdevio tests on: Nvme3n1 00:07:41.421 Test: blockdev write read block ...passed 00:07:41.421 Test: blockdev write zeroes read block ...passed 00:07:41.421 Test: blockdev write zeroes read no split ...passed 00:07:41.421 Test: blockdev write zeroes read split ...passed 00:07:41.421 Test: blockdev write zeroes read split partial ...passed 00:07:41.421 Test: blockdev reset ...[2024-11-18 13:20:37.369409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:41.421 [2024-11-18 13:20:37.371153] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:41.421 passed 00:07:41.421 Test: blockdev write read 8 blocks ...passed 00:07:41.421 Test: blockdev write read size > 128k ...passed 00:07:41.421 Test: blockdev write read invalid size ...passed 00:07:41.421 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:41.421 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:41.421 Test: blockdev write read max offset ...passed 00:07:41.421 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:41.421 Test: blockdev writev readv 8 blocks ...passed 00:07:41.421 Test: blockdev writev readv 30 x 1block ...passed 00:07:41.421 Test: blockdev writev readv block ...passed 00:07:41.421 Test: blockdev writev readv size > 128k ...passed 00:07:41.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:41.421 Test: blockdev comparev and writev ...passed 00:07:41.421 Test: blockdev nvme passthru rw ...passed 00:07:41.421 Test: blockdev nvme passthru vendor specific ...passed 00:07:41.421 Test: blockdev nvme admin passthru ...[2024-11-18 13:20:37.376281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c400e000 len:0x1000 00:07:41.421 [2024-11-18 13:20:37.376321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:41.421 [2024-11-18 13:20:37.376746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:41.421 [2024-11-18 13:20:37.376772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:41.421 passed 00:07:41.421 Test: blockdev copy ...passed 00:07:41.421 Suite: bdevio tests on: Nvme2n3 00:07:41.421 Test: blockdev write read block ...passed 00:07:41.421 Test: blockdev write zeroes read block ...passed 00:07:41.421 Test: blockdev write zeroes read no split ...passed 00:07:41.421 Test: blockdev write zeroes read split ...passed 00:07:41.421 Test: blockdev write zeroes read split partial ...passed 00:07:41.421 Test: blockdev reset ...[2024-11-18 13:20:37.419914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:41.421 [2024-11-18 13:20:37.421827] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:41.421 passed 00:07:41.421 Test: blockdev write read 8 blocks ...passed 00:07:41.421 Test: blockdev write read size > 128k ...passed 00:07:41.421 Test: blockdev write read invalid size ...passed 00:07:41.421 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:41.421 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:41.421 Test: blockdev write read max offset ...passed 00:07:41.421 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:41.421 Test: blockdev writev readv 8 blocks ...passed 00:07:41.421 Test: blockdev writev readv 30 x 1block ...passed 00:07:41.421 Test: blockdev writev readv block ...passed 00:07:41.421 Test: blockdev writev readv size > 128k ...passed 00:07:41.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:41.421 Test: blockdev comparev and writev ...[2024-11-18 13:20:37.426949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c400a000 len:0x1000 00:07:41.421 passed 00:07:41.421 Test: blockdev nvme passthru rw ...[2024-11-18 13:20:37.426992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:41.421 passed 00:07:41.421 Test: blockdev nvme passthru vendor specific ...passed 00:07:41.421 Test: blockdev nvme admin passthru ...[2024-11-18 13:20:37.427714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:41.421 [2024-11-18 13:20:37.427740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:41.421 passed 00:07:41.421 Test: blockdev copy ...passed 00:07:41.421 Suite: bdevio tests on: Nvme2n2 00:07:41.421 Test: blockdev write read block ...passed 00:07:41.421 Test: blockdev write zeroes read block ...passed 00:07:41.421 Test: blockdev write zeroes read no split ...passed 00:07:41.421 Test: blockdev write zeroes read split ...passed 00:07:41.421 Test: blockdev write zeroes read split partial ...passed 00:07:41.421 Test: blockdev reset ...[2024-11-18 13:20:37.465416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:41.421 [2024-11-18 13:20:37.467313] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:41.421 passed 00:07:41.421 Test: blockdev write read 8 blocks ...passed 00:07:41.421 Test: blockdev write read size > 128k ...passed 00:07:41.421 Test: blockdev write read invalid size ...passed 00:07:41.421 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:41.421 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:41.421 Test: blockdev write read max offset ...passed 00:07:41.421 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:41.421 Test: blockdev writev readv 8 blocks ...passed 00:07:41.421 Test: blockdev writev readv 30 x 1block ...passed 00:07:41.421 Test: blockdev writev readv block ...passed 00:07:41.421 Test: blockdev writev readv size > 128k ...passed 00:07:41.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:41.421 Test: blockdev comparev and writev ...[2024-11-18 13:20:37.472413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb405000 len:0x1000 00:07:41.421 [2024-11-18 13:20:37.472453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:41.421 passed 00:07:41.421 Test: blockdev nvme passthru rw ...passed 00:07:41.421 Test: blockdev nvme passthru vendor specific ...passed 00:07:41.421 Test: blockdev nvme admin passthru ...[2024-11-18 13:20:37.472975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:41.421 [2024-11-18 13:20:37.472999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:41.421 passed 00:07:41.421 Test: blockdev copy ...passed 00:07:41.421 Suite: bdevio tests on: Nvme2n1 00:07:41.421 Test: blockdev write read block ...passed 00:07:41.421 Test: blockdev write zeroes read block ...passed 00:07:41.421 Test: blockdev write zeroes read no split ...passed 00:07:41.421 Test: blockdev write zeroes read split ...passed 00:07:41.421 Test: blockdev write zeroes read split partial ...passed 00:07:41.421 Test: blockdev reset ...[2024-11-18 13:20:37.514697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:41.421 [2024-11-18 13:20:37.516439] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:41.421 passed 00:07:41.421 Test: blockdev write read 8 blocks ...passed 00:07:41.421 Test: blockdev write read size > 128k ...passed 00:07:41.421 Test: blockdev write read invalid size ...passed 00:07:41.421 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:41.421 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:41.421 Test: blockdev write read max offset ...passed 00:07:41.421 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:41.421 Test: blockdev writev readv 8 blocks ...passed 00:07:41.421 Test: blockdev writev readv 30 x 1block ...passed 00:07:41.421 Test: blockdev writev readv block ...passed 00:07:41.421 Test: blockdev writev readv size > 128k ...passed 00:07:41.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:41.421 Test: blockdev comparev and writev ...[2024-11-18 13:20:37.521995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4402000 len:0x1000 00:07:41.421 [2024-11-18 13:20:37.522037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:41.421 passed 00:07:41.421 Test: blockdev nvme passthru rw ...passed 00:07:41.421 Test: blockdev nvme passthru vendor specific ...passed 00:07:41.421 Test: blockdev nvme admin passthru ...[2024-11-18 13:20:37.522604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:41.422 [2024-11-18 13:20:37.522628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:41.422 passed 00:07:41.422 Test: blockdev copy ...passed 00:07:41.422 Suite: bdevio tests on: Nvme1n1p2 00:07:41.422 Test: blockdev write read block ...passed 00:07:41.680 Test: blockdev write zeroes read block ...passed 00:07:41.680 Test: blockdev write zeroes read no split ...passed 00:07:41.680 Test: blockdev write zeroes read split ...passed 00:07:41.680 Test: blockdev write zeroes read split partial ...passed 00:07:41.680 Test: blockdev reset ...[2024-11-18 13:20:37.563128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:41.680 [2024-11-18 13:20:37.564704] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:41.680 passed 00:07:41.680 Test: blockdev write read 8 blocks ...passed 00:07:41.680 Test: blockdev write read size > 128k ...passed 00:07:41.680 Test: blockdev write read invalid size ...passed 00:07:41.680 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:41.680 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:41.680 Test: blockdev write read max offset ...passed 00:07:41.680 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:41.680 Test: blockdev writev readv 8 blocks ...passed 00:07:41.680 Test: blockdev writev readv 30 x 1block ...passed 00:07:41.680 Test: blockdev writev readv block ...passed 00:07:41.680 Test: blockdev writev readv size > 128k ...passed 00:07:41.680 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:41.680 Test: blockdev comparev and writev ...[2024-11-18 13:20:37.569951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2de83b000 len:0x1000 00:07:41.680 [2024-11-18 13:20:37.569991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:41.680 passed 00:07:41.680 Test: blockdev nvme passthru rw ...passed 00:07:41.680 Test: blockdev nvme passthru vendor specific ...passed 00:07:41.680 Test: blockdev nvme admin passthru ...passed 00:07:41.680 Test: blockdev copy ...passed 00:07:41.680 Suite: bdevio tests on: Nvme1n1p1 00:07:41.680 Test: blockdev write read block ...passed 00:07:41.680 Test: blockdev write zeroes read block ...passed 00:07:41.680 Test: blockdev write zeroes read no split ...passed 00:07:41.680 Test: blockdev write zeroes read split ...passed 00:07:41.680 Test: blockdev write zeroes read split partial ...passed 00:07:41.680 Test: blockdev reset ...[2024-11-18 13:20:37.622036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:41.680 [2024-11-18 13:20:37.623672] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:41.680 passed 00:07:41.680 Test: blockdev write read 8 blocks ...passed 00:07:41.680 Test: blockdev write read size > 128k ...passed 00:07:41.680 Test: blockdev write read invalid size ...passed 00:07:41.680 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:41.680 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:41.680 Test: blockdev write read max offset ...passed 00:07:41.680 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:41.681 Test: blockdev writev readv 8 blocks ...passed 00:07:41.681 Test: blockdev writev readv 30 x 1block ...passed 00:07:41.681 Test: blockdev writev readv block ...passed 00:07:41.681 Test: blockdev writev readv size > 128k ...passed 00:07:41.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:41.681 Test: blockdev comparev and writev ...[2024-11-18 13:20:37.628896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2de837000 len:0x1000 00:07:41.681 [2024-11-18 13:20:37.628937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:41.681 passed 00:07:41.681 Test: blockdev nvme passthru rw ...passed 00:07:41.681 Test: blockdev nvme passthru vendor specific ...passed 00:07:41.681 Test: blockdev nvme admin passthru ...passed 00:07:41.681 Test: blockdev copy ...passed 00:07:41.681 Suite: bdevio tests on: Nvme0n1 00:07:41.681 Test: blockdev write read block ...passed 00:07:41.681 Test: blockdev write zeroes read block ...passed 00:07:41.681 Test: blockdev write zeroes read no split ...passed 00:07:41.681 Test: blockdev write zeroes read split ...passed 00:07:41.681 Test: blockdev write zeroes read split partial ...passed 00:07:41.681 Test: blockdev reset ...[2024-11-18 13:20:37.711912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:41.681 [2024-11-18 13:20:37.713754] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:41.681 passed 00:07:41.681 Test: blockdev write read 8 blocks ...passed 00:07:41.681 Test: blockdev write read size > 128k ...passed 00:07:41.681 Test: blockdev write read invalid size ...passed 00:07:41.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:41.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:41.681 Test: blockdev write read max offset ...passed 00:07:41.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:41.681 Test: blockdev writev readv 8 blocks ...passed 00:07:41.681 Test: blockdev writev readv 30 x 1block ...passed 00:07:41.681 Test: blockdev writev readv block ...passed 00:07:41.681 Test: blockdev writev readv size > 128k ...passed 00:07:41.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:41.681 Test: blockdev comparev and writev ...[2024-11-18 13:20:37.718243] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:41.681 separate metadata which is not supported yet. 
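The comparev_and_writev skip just above happens because Nvme0n1 carries separate (non-interleaved) metadata. A minimal sketch of how one could inspect that property with the same rpc.py used throughout this run; this is a hypothetical helper, not part of the test, and the md_size / md_interleave field names are assumed from bdev_get_bdevs JSON output and may differ by SPDK version:

    # Hypothetical check, not issued by the suite above: does the bdev report
    # separate metadata (the condition that makes bdevio skip comparev_and_writev)?
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_get_bdevs -b Nvme0n1 \
        | jq -r '.[0] | "md_size=\(.md_size) interleave=\(.md_interleave)"'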
00:07:41.681 passed 00:07:41.681 Test: blockdev nvme passthru rw ...passed 00:07:41.681 Test: blockdev nvme passthru vendor specific ...passed 00:07:41.681 Test: blockdev nvme admin passthru ...[2024-11-18 13:20:37.718698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:41.681 [2024-11-18 13:20:37.718740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:41.681 passed 00:07:41.681 Test: blockdev copy ...passed 00:07:41.681 00:07:41.681 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.681 suites 7 7 n/a 0 0 00:07:41.681 tests 161 161 161 0 0 00:07:41.681 asserts 1025 1025 1025 0 n/a 00:07:41.681 00:07:41.681 Elapsed time = 0.845 seconds 00:07:41.681 0 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73366 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73366 ']' 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73366 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73366 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.681 killing process with pid 73366 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73366' 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73366 00:07:41.681 13:20:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73366 00:07:42.247 13:20:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:42.247 00:07:42.247 real 0m2.027s 00:07:42.247 user 0m5.250s 00:07:42.247 sys 0m0.287s 00:07:42.247 ************************************ 00:07:42.247 END TEST bdev_bounds 00:07:42.247 ************************************ 00:07:42.247 13:20:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.247 13:20:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:42.506 13:20:38 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:42.506 13:20:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:42.506 13:20:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.506 13:20:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.506 ************************************ 00:07:42.506 START TEST bdev_nbd 00:07:42.506 ************************************ 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:42.506 13:20:38 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73426 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73426 /var/tmp/spdk-nbd.sock 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73426 ']' 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:42.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.506 13:20:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:42.506 [2024-11-18 13:20:38.473984] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
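The nbd_function_test prologue traced here amounts to launching bdev_svc on a private RPC socket and then driving it over that socket. A condensed sketch of that flow using the same paths as the trace, with the readiness wait reduced to a simple poll (the harness itself uses its waitforlisten helper; rpc_get_methods is only a liveness probe in this sketch):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    # start the bdev service app on a dedicated RPC socket with the test's JSON config
    "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 --json "$spdk/test/bdev/bdev.json" &
    nbd_pid=$!
    # crude readiness poll; the real script waits via waitforlisten
    until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done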
00:07:42.506 [2024-11-18 13:20:38.474100] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.506 [2024-11-18 13:20:38.630837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.764 [2024-11-18 13:20:38.650097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.330 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:43.587 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.588 1+0 records in 00:07:43.588 1+0 records out 00:07:43.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528306 s, 7.8 MB/s 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.588 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.846 1+0 records in 00:07:43.846 1+0 records out 00:07:43.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502615 s, 8.1 MB/s 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.846 13:20:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.105 1+0 records in 00:07:44.105 1+0 records out 00:07:44.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298586 s, 13.7 MB/s 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.105 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.362 13:20:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.362 1+0 records in 00:07:44.362 1+0 records out 00:07:44.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377508 s, 10.9 MB/s 00:07:44.363 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.363 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.363 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.363 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.363 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.363 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.363 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.363 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:44.363 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.624 1+0 records in 00:07:44.624 1+0 records out 00:07:44.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450401 s, 9.1 MB/s 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.624 1+0 records in 00:07:44.624 1+0 records out 00:07:44.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0014575 s, 2.8 MB/s 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.624 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.883 1+0 records in 00:07:44.883 1+0 records out 00:07:44.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467898 s, 8.8 MB/s 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.883 13:20:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd0", 00:07:45.141 "bdev_name": "Nvme0n1" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd1", 00:07:45.141 "bdev_name": "Nvme1n1p1" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd2", 00:07:45.141 "bdev_name": "Nvme1n1p2" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd3", 00:07:45.141 "bdev_name": "Nvme2n1" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd4", 00:07:45.141 "bdev_name": "Nvme2n2" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd5", 00:07:45.141 "bdev_name": "Nvme2n3" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd6", 00:07:45.141 "bdev_name": "Nvme3n1" 00:07:45.141 } 00:07:45.141 ]' 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd0", 00:07:45.141 "bdev_name": "Nvme0n1" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd1", 00:07:45.141 "bdev_name": "Nvme1n1p1" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd2", 00:07:45.141 "bdev_name": "Nvme1n1p2" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd3", 00:07:45.141 "bdev_name": "Nvme2n1" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd4", 00:07:45.141 "bdev_name": "Nvme2n2" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd5", 00:07:45.141 "bdev_name": "Nvme2n3" 00:07:45.141 }, 00:07:45.141 { 00:07:45.141 "nbd_device": "/dev/nbd6", 00:07:45.141 "bdev_name": "Nvme3n1" 00:07:45.141 } 00:07:45.141 ]' 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.141 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.399 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.657 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.914 13:20:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.914 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.172 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.430 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
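Each nbd_start_disk above is followed by the same readiness-and-readback check before the teardown now in progress. A per-device sketch of that pattern, condensed from the trace; BDEV, NBD and the scratch file path are placeholders:

    BDEV=Nvme0n1; NBD=/dev/nbd0
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # export the bdev over nbd (the first pass lets SPDK pick the device,
    # the second pass names it explicitly as shown here)
    $rpc nbd_start_disk "$BDEV" "$NBD"
    # wait for the kernel to register the nbd node
    until grep -q -w "$(basename "$NBD")" /proc/partitions; do sleep 0.1; done
    # read one 4 KiB block back with O_DIRECT and verify the size
    dd if="$NBD" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && rm -f /tmp/nbdtest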
00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.688 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:46.946 13:20:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:46.946 13:20:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:46.946 /dev/nbd0 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.946 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.946 1+0 records in 00:07:46.946 1+0 records out 00:07:46.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290467 s, 14.1 MB/s 00:07:46.947 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.947 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:46.947 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:47.204 /dev/nbd1 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.204 13:20:43 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.204 1+0 records in 00:07:47.204 1+0 records out 00:07:47.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356051 s, 11.5 MB/s 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.204 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:47.462 /dev/nbd10 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.462 1+0 records in 00:07:47.462 1+0 records out 00:07:47.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404432 s, 10.1 MB/s 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.462 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:47.719 /dev/nbd11 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.719 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.720 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.720 1+0 records in 00:07:47.720 1+0 records out 00:07:47.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430112 s, 9.5 MB/s 00:07:47.720 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.720 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.720 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.720 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.720 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.720 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.720 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.720 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:47.978 /dev/nbd12 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
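The teardown between the two rounds (nbd_stop_disk for each device, waitfornbd_exit polling /proc/partitions, then an empty nbd_get_disks listing) reduces to the sketch below; NBD is a placeholder, and the jq length check is an assumption of this sketch rather than the harness's own grep -c count:

    NBD=/dev/nbd0
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # ask the app to stop the export
    $rpc nbd_stop_disk "$NBD"
    # poll until the kernel drops the node from /proc/partitions
    while grep -q -w "$(basename "$NBD")" /proc/partitions; do sleep 0.1; done
    # confirm no nbd exports remain
    [ "$($rpc nbd_get_disks | jq 'length')" -eq 0 ] && echo "no nbd exports left"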
00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.978 1+0 records in 00:07:47.978 1+0 records out 00:07:47.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281549 s, 14.5 MB/s 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.978 13:20:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:48.236 /dev/nbd13 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.236 1+0 records in 00:07:48.236 1+0 records out 00:07:48.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415693 s, 9.9 MB/s 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.236 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:48.536 /dev/nbd14 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.536 1+0 records in 00:07:48.536 1+0 records out 00:07:48.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427632 s, 9.6 MB/s 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd0", 00:07:48.536 "bdev_name": "Nvme0n1" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd1", 00:07:48.536 "bdev_name": "Nvme1n1p1" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd10", 00:07:48.536 "bdev_name": "Nvme1n1p2" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd11", 00:07:48.536 "bdev_name": "Nvme2n1" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd12", 00:07:48.536 "bdev_name": "Nvme2n2" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd13", 00:07:48.536 "bdev_name": "Nvme2n3" 
00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd14", 00:07:48.536 "bdev_name": "Nvme3n1" 00:07:48.536 } 00:07:48.536 ]' 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.536 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd0", 00:07:48.536 "bdev_name": "Nvme0n1" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd1", 00:07:48.536 "bdev_name": "Nvme1n1p1" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd10", 00:07:48.536 "bdev_name": "Nvme1n1p2" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd11", 00:07:48.536 "bdev_name": "Nvme2n1" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd12", 00:07:48.536 "bdev_name": "Nvme2n2" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd13", 00:07:48.536 "bdev_name": "Nvme2n3" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "nbd_device": "/dev/nbd14", 00:07:48.536 "bdev_name": "Nvme3n1" 00:07:48.536 } 00:07:48.536 ]' 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:48.839 /dev/nbd1 00:07:48.839 /dev/nbd10 00:07:48.839 /dev/nbd11 00:07:48.839 /dev/nbd12 00:07:48.839 /dev/nbd13 00:07:48.839 /dev/nbd14' 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:48.839 /dev/nbd1 00:07:48.839 /dev/nbd10 00:07:48.839 /dev/nbd11 00:07:48.839 /dev/nbd12 00:07:48.839 /dev/nbd13 00:07:48.839 /dev/nbd14' 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:48.839 256+0 records in 00:07:48.839 256+0 records out 00:07:48.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00861468 s, 122 MB/s 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:48.839 256+0 records in 00:07:48.839 256+0 records out 00:07:48.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.114967 s, 9.1 MB/s 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:48.839 256+0 records in 00:07:48.839 256+0 records out 00:07:48.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166461 s, 6.3 MB/s 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.839 13:20:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:49.100 256+0 records in 00:07:49.100 256+0 records out 00:07:49.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.226076 s, 4.6 MB/s 00:07:49.100 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.100 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:49.362 256+0 records in 00:07:49.362 256+0 records out 00:07:49.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189434 s, 5.5 MB/s 00:07:49.362 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.362 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:49.622 256+0 records in 00:07:49.622 256+0 records out 00:07:49.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.226355 s, 4.6 MB/s 00:07:49.622 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.622 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:49.881 256+0 records in 00:07:49.881 256+0 records out 00:07:49.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218727 s, 4.8 MB/s 00:07:49.881 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.881 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:49.881 256+0 records in 00:07:49.881 256+0 records out 00:07:49.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103511 s, 10.1 MB/s 00:07:49.881 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:49.881 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.882 13:20:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.142 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.403 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.662 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.920 13:20:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.180 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.441 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:51.703 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:51.703 malloc_lvol_verify 00:07:51.965 13:20:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:51.965 cb6c1593-2c81-4541-805d-6bcd0b1ce5e7 00:07:51.965 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:52.226 96abf851-8bc5-4527-b77d-d0d23ebd8b85 00:07:52.226 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:52.486 /dev/nbd0 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:52.486 mke2fs 1.47.0 (5-Feb-2023) 00:07:52.486 Discarding device blocks: 0/4096 done 00:07:52.486 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:52.486 00:07:52.486 Allocating group tables: 0/1 done 00:07:52.486 Writing inode tables: 0/1 done 00:07:52.486 Creating journal (1024 blocks): done 00:07:52.486 Writing superblocks and filesystem accounting information: 0/1 done 00:07:52.486 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:52.486 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73426 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73426 ']' 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73426 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73426 00:07:52.747 killing process with pid 73426 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73426' 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73426 00:07:52.747 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73426 00:07:53.008 ************************************ 00:07:53.008 END TEST bdev_nbd 00:07:53.008 ************************************ 00:07:53.008 13:20:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:53.008 00:07:53.008 real 0m10.493s 00:07:53.008 user 0m14.676s 00:07:53.008 sys 0m3.642s 00:07:53.008 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.008 13:20:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 13:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:53.008 13:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:53.008 skipping fio tests on NVMe due to multi-ns failures. 00:07:53.008 13:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:53.008 13:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:53.008 13:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:53.008 13:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.008 13:20:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:53.008 13:20:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.008 13:20:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 ************************************ 00:07:53.008 START TEST bdev_verify 00:07:53.008 ************************************ 00:07:53.008 13:20:48 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.008 [2024-11-18 13:20:49.029798] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:07:53.008 [2024-11-18 13:20:49.029973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73831 ] 00:07:53.269 [2024-11-18 13:20:49.191604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.269 [2024-11-18 13:20:49.212113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.269 [2024-11-18 13:20:49.212160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.529 Running I/O for 5 seconds... 
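Note on the bdev_verify stage that has just started: it is a plain bdevperf run against the bdevs described in test/bdev/bdev.json, with queue depth 128 (-q), 4 KiB I/Os (-o 4096), the verify workload (-w verify), a 5-second duration (-t 5), and core mask 0x3 (-m), which matches the two reactors reported above; the -C flag is copied verbatim from the trace. Reproducing it outside the harness is just the traced command, using this run's repository path:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3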
00:07:55.857 20288.00 IOPS, 79.25 MiB/s [2024-11-18T13:20:52.929Z] 19264.00 IOPS, 75.25 MiB/s [2024-11-18T13:20:53.872Z] 19584.00 IOPS, 76.50 MiB/s [2024-11-18T13:20:54.813Z] 19264.00 IOPS, 75.25 MiB/s [2024-11-18T13:20:54.813Z] 19020.80 IOPS, 74.30 MiB/s 00:07:58.685 Latency(us) 00:07:58.685 [2024-11-18T13:20:54.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.685 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x0 length 0xbd0bd 00:07:58.685 Nvme0n1 : 5.09 1357.65 5.30 0.00 0.00 94064.78 16837.71 80256.39 00:07:58.685 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:58.685 Nvme0n1 : 5.09 1332.77 5.21 0.00 0.00 95807.07 16938.54 83886.08 00:07:58.685 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x0 length 0x4ff80 00:07:58.685 Nvme1n1p1 : 5.09 1356.93 5.30 0.00 0.00 93963.20 19963.27 76223.41 00:07:58.685 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:58.685 Nvme1n1p1 : 5.09 1332.38 5.20 0.00 0.00 95550.50 18450.90 79449.80 00:07:58.685 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x0 length 0x4ff7f 00:07:58.685 Nvme1n1p2 : 5.10 1356.55 5.30 0.00 0.00 93829.66 19358.33 75820.11 00:07:58.685 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:58.685 Nvme1n1p2 : 5.10 1331.25 5.20 0.00 0.00 95356.83 21273.99 79046.50 00:07:58.685 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x0 length 0x80000 00:07:58.685 Nvme2n1 : 5.10 1355.62 5.30 0.00 0.00 93735.92 21576.47 75013.51 00:07:58.685 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x80000 length 0x80000 00:07:58.685 Nvme2n1 : 5.10 1330.75 5.20 0.00 0.00 95159.86 21576.47 80659.69 00:07:58.685 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x0 length 0x80000 00:07:58.685 Nvme2n2 : 5.10 1354.91 5.29 0.00 0.00 93600.32 22483.89 74610.22 00:07:58.685 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x80000 length 0x80000 00:07:58.685 Nvme2n2 : 5.10 1330.40 5.20 0.00 0.00 94995.80 18551.73 81869.59 00:07:58.685 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.685 Verification LBA range: start 0x0 length 0x80000 00:07:58.685 Nvme2n3 : 5.10 1354.23 5.29 0.00 0.00 93430.76 20064.10 75416.81 00:07:58.686 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.686 Verification LBA range: start 0x80000 length 0x80000 00:07:58.686 Nvme2n3 : 5.10 1329.70 5.19 0.00 0.00 94854.17 12351.02 81466.29 00:07:58.686 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.686 Verification LBA range: start 0x0 length 0x20000 00:07:58.686 Nvme3n1 : 5.11 1353.55 5.29 0.00 0.00 93252.34 12300.60 75820.11 00:07:58.686 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.686 Verification LBA range: start 0x20000 length 0x20000 00:07:58.686 
Nvme3n1 : 5.10 1329.02 5.19 0.00 0.00 94780.61 8922.98 82272.89 00:07:58.686 [2024-11-18T13:20:54.814Z] =================================================================================================================== 00:07:58.686 [2024-11-18T13:20:54.814Z] Total : 18805.72 73.46 0.00 0.00 94448.75 8922.98 83886.08 00:07:59.626 00:07:59.626 real 0m6.683s 00:07:59.626 user 0m12.651s 00:07:59.626 sys 0m0.202s 00:07:59.626 13:20:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.626 ************************************ 00:07:59.626 END TEST bdev_verify 00:07:59.626 ************************************ 00:07:59.626 13:20:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 13:20:55 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.626 13:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:59.626 13:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.626 13:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 ************************************ 00:07:59.626 START TEST bdev_verify_big_io 00:07:59.626 ************************************ 00:07:59.626 13:20:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.887 [2024-11-18 13:20:55.775530] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:07:59.887 [2024-11-18 13:20:55.775642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73923 ] 00:07:59.887 [2024-11-18 13:20:55.934622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.887 [2024-11-18 13:20:55.955313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.887 [2024-11-18 13:20:55.955424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.459 Running I/O for 5 seconds... 
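Reading the Latency tables: the MiB/s column is simply IOPS multiplied by the I/O size of the run, so the Total rows can be checked by hand.

  18805.72 IOPS x 4096 B  ≈ 77.0 MB/s  = 73.46 MiB/s   (verify run above, 4 KiB I/Os)
  1535.73 IOPS  x 65536 B ≈ 100.6 MB/s = 95.98 MiB/s   (big-I/O run starting here, 64 KiB I/Os; see its Total row below)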
00:08:06.560 1470.00 IOPS, 91.88 MiB/s [2024-11-18T13:21:02.945Z] 3418.00 IOPS, 213.62 MiB/s 00:08:06.817 Latency(us) 00:08:06.817 [2024-11-18T13:21:02.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.817 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.817 Verification LBA range: start 0x0 length 0xbd0b 00:08:06.817 Nvme0n1 : 5.90 103.84 6.49 0.00 0.00 1161282.13 20265.75 1445421.69 00:08:06.817 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.817 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:06.817 Nvme0n1 : 6.35 139.01 8.69 0.00 0.00 714643.26 510.42 1871304.86 00:08:06.817 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.817 Verification LBA range: start 0x0 length 0x4ff8 00:08:06.817 Nvme1n1p1 : 6.00 99.87 6.24 0.00 0.00 1182067.79 57268.38 1910021.51 00:08:06.817 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.817 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:06.817 Nvme1n1p1 : 5.98 88.49 5.53 0.00 0.00 1342930.03 22282.24 1664816.05 00:08:06.817 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.817 Verification LBA range: start 0x0 length 0x4ff7 00:08:06.817 Nvme1n1p2 : 6.00 96.01 6.00 0.00 0.00 1182188.92 91145.45 1664816.05 00:08:06.817 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.817 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:06.817 Nvme1n1p2 : 6.12 92.18 5.76 0.00 0.00 1257784.22 30045.74 1703532.70 00:08:06.817 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.817 Verification LBA range: start 0x0 length 0x8000 00:08:06.817 Nvme2n1 : 6.00 111.81 6.99 0.00 0.00 992438.78 93161.94 1071160.71 00:08:06.817 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.817 Verification LBA range: start 0x8000 length 0x8000 00:08:06.817 Nvme2n1 : 6.12 92.33 5.77 0.00 0.00 1208143.71 39321.60 1742249.35 00:08:06.817 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.817 Verification LBA range: start 0x0 length 0x8000 00:08:06.817 Nvme2n2 : 6.18 119.27 7.45 0.00 0.00 898558.27 60898.07 1213121.77 00:08:06.818 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.818 Verification LBA range: start 0x8000 length 0x8000 00:08:06.818 Nvme2n2 : 6.16 99.46 6.22 0.00 0.00 1109967.66 40934.79 1768060.46 00:08:06.818 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.818 Verification LBA range: start 0x0 length 0x8000 00:08:06.818 Nvme2n3 : 6.25 128.09 8.01 0.00 0.00 808663.39 35490.26 948557.98 00:08:06.818 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.818 Verification LBA range: start 0x8000 length 0x8000 00:08:06.818 Nvme2n3 : 6.25 105.61 6.60 0.00 0.00 1003862.27 27021.00 1793871.56 00:08:06.818 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.818 Verification LBA range: start 0x0 length 0x2000 00:08:06.818 Nvme3n1 : 6.31 147.14 9.20 0.00 0.00 684897.40 1940.87 1090519.04 00:08:06.818 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.818 Verification LBA range: start 0x2000 length 0x2000 00:08:06.818 Nvme3n1 : 6.25 112.62 7.04 0.00 0.00 906797.69 44766.13 1593835.52 00:08:06.818 [2024-11-18T13:21:02.946Z] 
=================================================================================================================== 00:08:06.818 [2024-11-18T13:21:02.946Z] Total : 1535.73 95.98 0.00 0.00 998286.17 510.42 1910021.51 00:08:07.750 ************************************ 00:08:07.750 END TEST bdev_verify_big_io 00:08:07.750 ************************************ 00:08:07.750 00:08:07.750 real 0m7.928s 00:08:07.750 user 0m15.111s 00:08:07.750 sys 0m0.225s 00:08:07.750 13:21:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.750 13:21:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:07.750 13:21:03 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.750 13:21:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:07.750 13:21:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.750 13:21:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:07.750 ************************************ 00:08:07.750 START TEST bdev_write_zeroes 00:08:07.750 ************************************ 00:08:07.750 13:21:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.750 [2024-11-18 13:21:03.739867] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:08:07.750 [2024-11-18 13:21:03.739971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74027 ] 00:08:08.007 [2024-11-18 13:21:03.895079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.007 [2024-11-18 13:21:03.915825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.264 Running I/O for 1 seconds... 
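Note on the bdev_write_zeroes stage just launched: it reuses the same bdevperf binary and JSON config, only the workload and duration change. -w write_zeroes issues write-zeroes requests instead of the verify pattern, -t 1 keeps the run to one second, and no core mask is passed, which is why only a single reactor is reported above. The traced command, verbatim from this run:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1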
00:08:09.897 27580.00 IOPS, 107.73 MiB/s 00:08:09.897 Latency(us) 00:08:09.897 [2024-11-18T13:21:06.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.897 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.897 Nvme0n1 : 1.51 2491.92 9.73 0.00 0.00 44187.88 7108.14 571070.62 00:08:09.897 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.897 Nvme1n1p1 : 1.06 3871.86 15.12 0.00 0.00 32952.62 11090.71 189550.28 00:08:09.897 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.897 Nvme1n1p2 : 1.06 3866.89 15.11 0.00 0.00 32914.62 8721.33 190356.87 00:08:09.897 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.897 Nvme2n1 : 1.06 3862.47 15.09 0.00 0.00 32893.16 8721.33 189550.28 00:08:09.897 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.897 Nvme2n2 : 1.06 3858.16 15.07 0.00 0.00 32866.94 8368.44 183097.50 00:08:09.897 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.897 Nvme2n3 : 1.06 3853.64 15.05 0.00 0.00 32837.53 7158.55 182290.90 00:08:09.897 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.897 Nvme3n1 : 1.06 3969.68 15.51 0.00 0.00 31837.68 7208.96 183097.50 00:08:09.897 [2024-11-18T13:21:06.025Z] =================================================================================================================== 00:08:09.897 [2024-11-18T13:21:06.025Z] Total : 25774.63 100.68 0.00 0.00 34232.59 7108.14 571070.62 00:08:10.837 00:08:10.837 real 0m3.269s 00:08:10.837 user 0m2.925s 00:08:10.837 sys 0m0.232s 00:08:10.837 13:21:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.837 13:21:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:10.837 ************************************ 00:08:10.837 END TEST bdev_write_zeroes 00:08:10.837 ************************************ 00:08:11.096 13:21:06 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.096 13:21:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:11.096 13:21:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.096 13:21:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.096 ************************************ 00:08:11.096 START TEST bdev_json_nonenclosed 00:08:11.096 ************************************ 00:08:11.096 13:21:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.096 [2024-11-18 13:21:07.058282] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:08:11.096 [2024-11-18 13:21:07.058391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74080 ] 00:08:11.096 [2024-11-18 13:21:07.216267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.354 [2024-11-18 13:21:07.235569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.354 [2024-11-18 13:21:07.235648] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:11.354 [2024-11-18 13:21:07.235666] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:11.354 [2024-11-18 13:21:07.235680] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.354 00:08:11.354 real 0m0.306s 00:08:11.354 user 0m0.122s 00:08:11.354 sys 0m0.080s 00:08:11.354 13:21:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.354 13:21:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 ************************************ 00:08:11.354 END TEST bdev_json_nonenclosed 00:08:11.354 ************************************ 00:08:11.354 13:21:07 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.354 13:21:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:11.354 13:21:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.354 13:21:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 ************************************ 00:08:11.354 START TEST bdev_json_nonarray 00:08:11.354 ************************************ 00:08:11.354 13:21:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.354 [2024-11-18 13:21:07.431203] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:08:11.354 [2024-11-18 13:21:07.431336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74100 ] 00:08:11.611 [2024-11-18 13:21:07.591347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.611 [2024-11-18 13:21:07.612810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.611 [2024-11-18 13:21:07.612900] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
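Both JSON cases traced above are negative tests: bdevperf is pointed at a deliberately malformed config (nonenclosed.json, then nonarray.json) and is expected to exit non-zero with exactly these json_config errors. For contrast, a well-formed config is a top-level object whose 'subsystems' member is an array of subsystem entries; the sketch below shows that shape only, with an illustrative malloc bdev rather than the actual contents of either test file.

cat > /tmp/minimal_bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create",
    "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 } }
] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/minimal_bdev.json -q 128 -o 4096 -w write_zeroes -t 1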
00:08:11.611 [2024-11-18 13:21:07.612919] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:11.611 [2024-11-18 13:21:07.612933] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.611 00:08:11.611 real 0m0.317s 00:08:11.611 user 0m0.125s 00:08:11.611 sys 0m0.088s 00:08:11.611 13:21:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.611 ************************************ 00:08:11.611 END TEST bdev_json_nonarray 00:08:11.611 ************************************ 00:08:11.611 13:21:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:11.611 13:21:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:11.611 13:21:07 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:11.612 13:21:07 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:11.612 13:21:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.612 13:21:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.612 13:21:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.612 ************************************ 00:08:11.612 START TEST bdev_gpt_uuid 00:08:11.612 ************************************ 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74120 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74120 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 74120 ']' 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.612 13:21:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.871 [2024-11-18 13:21:07.812450] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
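Note on the bdev_gpt_uuid test starting here: it brings up a bare spdk_tgt, loads the same test/bdev/bdev.json, and then asserts that the two GPT partition bdevs can be looked up by their partition GUIDs. The lookup itself is an ordinary RPC plus the jq filter used in the trace; the GUID below is the one this run reports for Nvme1n1p1 and is specific to this disk image.

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].driver_specific.gpt.unique_partition_guid'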
00:08:11.871 [2024-11-18 13:21:07.812613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74120 ] 00:08:11.871 [2024-11-18 13:21:07.971830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.871 [2024-11-18 13:21:07.992845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.812 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.812 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:12.812 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.812 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.812 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.073 Some configs were skipped because the RPC state that can call them passed over. 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:13.073 { 00:08:13.073 "name": "Nvme1n1p1", 00:08:13.073 "aliases": [ 00:08:13.073 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:13.073 ], 00:08:13.073 "product_name": "GPT Disk", 00:08:13.073 "block_size": 4096, 00:08:13.073 "num_blocks": 655104, 00:08:13.073 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:13.073 "assigned_rate_limits": { 00:08:13.073 "rw_ios_per_sec": 0, 00:08:13.073 "rw_mbytes_per_sec": 0, 00:08:13.073 "r_mbytes_per_sec": 0, 00:08:13.073 "w_mbytes_per_sec": 0 00:08:13.073 }, 00:08:13.073 "claimed": false, 00:08:13.073 "zoned": false, 00:08:13.073 "supported_io_types": { 00:08:13.073 "read": true, 00:08:13.073 "write": true, 00:08:13.073 "unmap": true, 00:08:13.073 "flush": true, 00:08:13.073 "reset": true, 00:08:13.073 "nvme_admin": false, 00:08:13.073 "nvme_io": false, 00:08:13.073 "nvme_io_md": false, 00:08:13.073 "write_zeroes": true, 00:08:13.073 "zcopy": false, 00:08:13.073 "get_zone_info": false, 00:08:13.073 "zone_management": false, 00:08:13.073 "zone_append": false, 00:08:13.073 "compare": true, 00:08:13.073 "compare_and_write": false, 00:08:13.073 "abort": true, 00:08:13.073 "seek_hole": false, 00:08:13.073 "seek_data": false, 00:08:13.073 "copy": true, 00:08:13.073 "nvme_iov_md": false 00:08:13.073 }, 00:08:13.073 "driver_specific": { 
00:08:13.073 "gpt": { 00:08:13.073 "base_bdev": "Nvme1n1", 00:08:13.073 "offset_blocks": 256, 00:08:13.073 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:13.073 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:13.073 "partition_name": "SPDK_TEST_first" 00:08:13.073 } 00:08:13.073 } 00:08:13.073 } 00:08:13.073 ]' 00:08:13.073 13:21:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:13.073 { 00:08:13.073 "name": "Nvme1n1p2", 00:08:13.073 "aliases": [ 00:08:13.073 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:13.073 ], 00:08:13.073 "product_name": "GPT Disk", 00:08:13.073 "block_size": 4096, 00:08:13.073 "num_blocks": 655103, 00:08:13.073 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:13.073 "assigned_rate_limits": { 00:08:13.073 "rw_ios_per_sec": 0, 00:08:13.073 "rw_mbytes_per_sec": 0, 00:08:13.073 "r_mbytes_per_sec": 0, 00:08:13.073 "w_mbytes_per_sec": 0 00:08:13.073 }, 00:08:13.073 "claimed": false, 00:08:13.073 "zoned": false, 00:08:13.073 "supported_io_types": { 00:08:13.073 "read": true, 00:08:13.073 "write": true, 00:08:13.073 "unmap": true, 00:08:13.073 "flush": true, 00:08:13.073 "reset": true, 00:08:13.073 "nvme_admin": false, 00:08:13.073 "nvme_io": false, 00:08:13.073 "nvme_io_md": false, 00:08:13.073 "write_zeroes": true, 00:08:13.073 "zcopy": false, 00:08:13.073 "get_zone_info": false, 00:08:13.073 "zone_management": false, 00:08:13.073 "zone_append": false, 00:08:13.073 "compare": true, 00:08:13.073 "compare_and_write": false, 00:08:13.073 "abort": true, 00:08:13.073 "seek_hole": false, 00:08:13.073 "seek_data": false, 00:08:13.073 "copy": true, 00:08:13.073 "nvme_iov_md": false 00:08:13.073 }, 00:08:13.073 "driver_specific": { 00:08:13.073 "gpt": { 00:08:13.073 "base_bdev": "Nvme1n1", 00:08:13.073 "offset_blocks": 655360, 00:08:13.073 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:13.073 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:13.073 "partition_name": "SPDK_TEST_second" 00:08:13.073 } 00:08:13.073 } 00:08:13.073 } 00:08:13.073 ]' 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:13.073 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74120 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 74120 ']' 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 74120 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74120 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.333 killing process with pid 74120 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74120' 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 74120 00:08:13.333 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 74120 00:08:13.594 00:08:13.594 real 0m1.748s 00:08:13.594 user 0m1.922s 00:08:13.594 sys 0m0.343s 00:08:13.594 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.594 ************************************ 00:08:13.594 END TEST bdev_gpt_uuid 00:08:13.594 13:21:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.594 ************************************ 00:08:13.594 13:21:09 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:13.594 13:21:09 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:13.594 13:21:09 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:13.594 13:21:09 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:13.594 13:21:09 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:13.594 13:21:09 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:13.594 13:21:09 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:13.594 13:21:09 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:13.594 13:21:09 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:13.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:13.855 Waiting for block devices as requested 00:08:14.113 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.113 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:14.113 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.113 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:19.396 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:19.396 13:21:15 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:19.396 13:21:15 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:19.657 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:19.657 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:19.657 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:19.657 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:19.657 13:21:15 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:19.657 00:08:19.657 real 0m50.639s 00:08:19.657 user 1m4.860s 00:08:19.657 sys 0m7.668s 00:08:19.657 13:21:15 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.657 ************************************ 00:08:19.657 END TEST blockdev_nvme_gpt 00:08:19.657 ************************************ 00:08:19.657 13:21:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:19.657 13:21:15 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:19.657 13:21:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.657 13:21:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.657 13:21:15 -- common/autotest_common.sh@10 -- # set +x 00:08:19.657 ************************************ 00:08:19.657 START TEST nvme 00:08:19.657 ************************************ 00:08:19.657 13:21:15 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:19.657 * Looking for test storage... 00:08:19.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:19.657 13:21:15 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.657 13:21:15 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.657 13:21:15 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.918 13:21:15 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.918 13:21:15 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.918 13:21:15 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.918 13:21:15 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.918 13:21:15 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.918 13:21:15 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.918 13:21:15 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.918 13:21:15 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.918 13:21:15 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.918 13:21:15 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.918 13:21:15 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.918 13:21:15 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.918 13:21:15 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:19.918 13:21:15 nvme -- scripts/common.sh@345 -- # : 1 00:08:19.918 13:21:15 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.918 13:21:15 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.918 13:21:15 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:19.918 13:21:15 nvme -- scripts/common.sh@353 -- # local d=1 00:08:19.918 13:21:15 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.918 13:21:15 nvme -- scripts/common.sh@355 -- # echo 1 00:08:19.918 13:21:15 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.918 13:21:15 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:19.918 13:21:15 nvme -- scripts/common.sh@353 -- # local d=2 00:08:19.918 13:21:15 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.918 13:21:15 nvme -- scripts/common.sh@355 -- # echo 2 00:08:19.918 13:21:15 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.918 13:21:15 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.918 13:21:15 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.918 13:21:15 nvme -- scripts/common.sh@368 -- # return 0 00:08:19.918 13:21:15 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.918 13:21:15 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.918 --rc genhtml_branch_coverage=1 00:08:19.918 --rc genhtml_function_coverage=1 00:08:19.918 --rc genhtml_legend=1 00:08:19.918 --rc geninfo_all_blocks=1 00:08:19.918 --rc geninfo_unexecuted_blocks=1 00:08:19.918 00:08:19.918 ' 00:08:19.918 13:21:15 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.918 --rc genhtml_branch_coverage=1 00:08:19.918 --rc genhtml_function_coverage=1 00:08:19.918 --rc genhtml_legend=1 00:08:19.918 --rc geninfo_all_blocks=1 00:08:19.918 --rc geninfo_unexecuted_blocks=1 00:08:19.918 00:08:19.918 ' 00:08:19.918 13:21:15 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.918 --rc genhtml_branch_coverage=1 00:08:19.918 --rc genhtml_function_coverage=1 00:08:19.918 --rc genhtml_legend=1 00:08:19.918 --rc geninfo_all_blocks=1 00:08:19.918 --rc geninfo_unexecuted_blocks=1 00:08:19.918 00:08:19.918 ' 00:08:19.918 13:21:15 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.918 --rc genhtml_branch_coverage=1 00:08:19.918 --rc genhtml_function_coverage=1 00:08:19.918 --rc genhtml_legend=1 00:08:19.918 --rc geninfo_all_blocks=1 00:08:19.918 --rc geninfo_unexecuted_blocks=1 00:08:19.918 00:08:19.918 ' 00:08:19.918 13:21:15 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:20.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:20.752 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.752 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.013 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.013 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.013 13:21:17 nvme -- nvme/nvme.sh@79 -- # uname 00:08:21.013 13:21:17 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:21.013 13:21:17 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:21.013 13:21:17 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:21.013 13:21:17 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:21.013 13:21:17 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:21.013 13:21:17 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:21.013 Waiting for stub to ready for secondary processes... 00:08:21.013 13:21:17 nvme -- common/autotest_common.sh@1075 -- # stubpid=74745 00:08:21.013 13:21:17 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:21.013 13:21:17 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:21.013 13:21:17 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74745 ]] 00:08:21.013 13:21:17 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:21.013 13:21:17 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:21.013 [2024-11-18 13:21:17.050858] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:08:21.013 [2024-11-18 13:21:17.051015] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:21.956 13:21:18 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:21.956 13:21:18 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74745 ]] 00:08:21.956 13:21:18 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:21.956 [2024-11-18 13:21:18.080157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.218 [2024-11-18 13:21:18.099060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.218 [2024-11-18 13:21:18.099506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.218 [2024-11-18 13:21:18.099596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.218 [2024-11-18 13:21:18.112993] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:22.218 [2024-11-18 13:21:18.113037] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:22.218 [2024-11-18 13:21:18.126790] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:22.218 [2024-11-18 13:21:18.126969] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:22.218 [2024-11-18 13:21:18.128292] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:22.218 [2024-11-18 13:21:18.128521] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:22.218 [2024-11-18 13:21:18.128588] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:22.218 [2024-11-18 13:21:18.129713] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:22.218 [2024-11-18 13:21:18.130024] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:22.218 [2024-11-18 13:21:18.130108] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:22.218 [2024-11-18 13:21:18.132760] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:22.218 [2024-11-18 13:21:18.133159] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:22.218 [2024-11-18 13:21:18.133356] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:22.218 [2024-11-18 13:21:18.133466] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:22.218 [2024-11-18 13:21:18.133580] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:23.163 done. 00:08:23.163 13:21:19 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:23.163 13:21:19 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:23.163 13:21:19 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:23.163 13:21:19 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:23.163 13:21:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.163 13:21:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.163 ************************************ 00:08:23.163 START TEST nvme_reset 00:08:23.163 ************************************ 00:08:23.163 13:21:19 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:23.163 Initializing NVMe Controllers 00:08:23.163 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:23.163 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:23.164 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:23.164 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:23.164 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:23.164 00:08:23.164 real 0m0.211s 00:08:23.164 user 0m0.072s 00:08:23.164 sys 0m0.093s 00:08:23.164 13:21:19 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.164 ************************************ 00:08:23.164 END TEST nvme_reset 00:08:23.164 ************************************ 00:08:23.164 13:21:19 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:23.468 13:21:19 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:23.468 13:21:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.468 13:21:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.468 13:21:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.468 ************************************ 00:08:23.468 START TEST nvme_identify 00:08:23.468 ************************************ 00:08:23.468 13:21:19 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:23.468 13:21:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:23.468 13:21:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:23.468 13:21:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:23.468 13:21:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:23.468 13:21:19 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:23.468 13:21:19 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:23.468 13:21:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:23.468 13:21:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:23.468 13:21:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:23.468 13:21:19 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:23.468 13:21:19 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:23.468 13:21:19 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:23.468 ===================================================== 00:08:23.468 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.468 ===================================================== 00:08:23.468 Controller Capabilities/Features 00:08:23.468 ================================ 00:08:23.468 Vendor ID: 1b36 00:08:23.468 Subsystem Vendor ID: 1af4 00:08:23.468 Serial Number: 12340 00:08:23.468 Model Number: QEMU NVMe Ctrl 00:08:23.468 Firmware Version: 8.0.0 00:08:23.468 Recommended Arb Burst: 6 00:08:23.468 IEEE OUI Identifier: 00 54 52 00:08:23.468 Multi-path I/O 00:08:23.468 May have multiple subsystem ports: No 00:08:23.468 May have multiple controllers: No 00:08:23.468 Associated with SR-IOV VF: No 00:08:23.468 Max Data Transfer Size: 524288 00:08:23.468 Max Number of Namespaces: 256 00:08:23.468 Max Number of I/O Queues: 64 00:08:23.468 NVMe Specification Version (VS): 1.4 00:08:23.468 NVMe Specification Version (Identify): 1.4 00:08:23.468 Maximum Queue Entries: 2048 00:08:23.468 Contiguous Queues Required: Yes 00:08:23.468 Arbitration Mechanisms Supported 00:08:23.468 Weighted Round Robin: Not Supported 00:08:23.468 Vendor Specific: Not Supported 00:08:23.468 Reset Timeout: 7500 ms 00:08:23.468 Doorbell Stride: 4 bytes 00:08:23.468 NVM Subsystem Reset: Not Supported 00:08:23.468 Command Sets Supported 00:08:23.468 NVM Command Set: Supported 00:08:23.468 Boot Partition: Not Supported 00:08:23.468 Memory Page Size Minimum: 4096 bytes 00:08:23.468 Memory Page Size Maximum: 65536 bytes 00:08:23.468 Persistent Memory Region: Not Supported 00:08:23.468 Optional Asynchronous Events Supported 00:08:23.468 Namespace Attribute Notices: Supported 00:08:23.468 Firmware Activation Notices: Not Supported 00:08:23.468 ANA Change Notices: Not Supported 00:08:23.468 PLE Aggregate Log Change Notices: Not Supported 00:08:23.468 LBA Status Info Alert Notices: Not Supported 00:08:23.468 EGE Aggregate Log Change Notices: Not Supported 00:08:23.468 Normal NVM Subsystem Shutdown event: Not Supported 00:08:23.468 Zone Descriptor Change Notices: Not Supported 00:08:23.468 Discovery Log Change Notices: Not Supported 00:08:23.468 Controller Attributes 00:08:23.468 128-bit Host Identifier: Not Supported 00:08:23.468 Non-Operational Permissive Mode: Not Supported 00:08:23.468 NVM Sets: Not Supported 00:08:23.468 Read Recovery Levels: Not Supported 00:08:23.468 Endurance Groups: Not Supported 00:08:23.468 Predictable Latency Mode: Not Supported 00:08:23.468 Traffic Based Keep ALive: Not Supported 00:08:23.468 Namespace Granularity: Not Supported 00:08:23.468 SQ Associations: Not Supported 00:08:23.468 UUID List: Not Supported 00:08:23.468 Multi-Domain Subsystem: Not Supported 00:08:23.468 Fixed Capacity Management: Not Supported 00:08:23.468 Variable Capacity Management: Not Supported 00:08:23.468 Delete Endurance Group: Not Supported 00:08:23.468 Delete NVM Set: Not Supported 00:08:23.468 Extended LBA Formats Supported: Supported 00:08:23.468 Flexible Data Placement Supported: Not Supported 00:08:23.468 00:08:23.468 Controller Memory Buffer Support 00:08:23.468 ================================ 00:08:23.468 Supported: No 00:08:23.468 00:08:23.468 Persistent Memory Region Support 00:08:23.468 ================================ 00:08:23.468 Supported: No 00:08:23.468 00:08:23.468 Admin 
Command Set Attributes 00:08:23.468 ============================ 00:08:23.468 Security Send/Receive: Not Supported 00:08:23.468 Format NVM: Supported 00:08:23.468 Firmware Activate/Download: Not Supported 00:08:23.468 Namespace Management: Supported 00:08:23.468 Device Self-Test: Not Supported 00:08:23.468 Directives: Supported 00:08:23.468 NVMe-MI: Not Supported 00:08:23.468 Virtualization Management: Not Supported 00:08:23.468 Doorbell Buffer Config: Supported 00:08:23.468 Get LBA Status Capability: Not Supported 00:08:23.468 Command & Feature Lockdown Capability: Not Supported 00:08:23.468 Abort Command Limit: 4 00:08:23.468 Async Event Request Limit: 4 00:08:23.468 Number of Firmware Slots: N/A 00:08:23.468 Firmware Slot 1 Read-Only: N/A 00:08:23.468 Firmware Activation Without Reset: N/A 00:08:23.468 Multiple Update Detection Support: N/A 00:08:23.468 Firmware Update Granularity: No Information Provided 00:08:23.468 Per-Namespace SMART Log: Yes 00:08:23.468 Asymmetric Namespace Access Log Page: Not Supported 00:08:23.468 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:23.468 Command Effects Log Page: Supported 00:08:23.468 Get Log Page Extended Data: Supported 00:08:23.468 Telemetry Log Pages: Not Supported 00:08:23.468 Persistent Event Log Pages: Not Supported 00:08:23.468 Supported Log Pages Log Page: May Support 00:08:23.468 Commands Supported & Effects Log Page: Not Supported 00:08:23.468 Feature Identifiers & Effects Log Page:May Support 00:08:23.468 NVMe-MI Commands & Effects Log Page: May Support 00:08:23.468 Data Area 4 for Telemetry Log: Not Supported 00:08:23.468 Error Log Page Entries Supported: 1 00:08:23.468 Keep Alive: Not Supported 00:08:23.468 00:08:23.468 NVM Command Set Attributes 00:08:23.468 ========================== 00:08:23.468 Submission Queue Entry Size 00:08:23.468 Max: 64 00:08:23.468 Min: 64 00:08:23.468 Completion Queue Entry Size 00:08:23.468 Max: 16 00:08:23.468 Min: 16 00:08:23.468 Number of Namespaces: 256 00:08:23.468 Compare Command: Supported 00:08:23.468 Write Uncorrectable Command: Not Supported 00:08:23.468 Dataset Management Command: Supported 00:08:23.468 Write Zeroes Command: Supported 00:08:23.468 Set Features Save Field: Supported 00:08:23.468 Reservations: Not Supported 00:08:23.468 Timestamp: Supported 00:08:23.468 Copy: Supported 00:08:23.468 Volatile Write Cache: Present 00:08:23.468 Atomic Write Unit (Normal): 1 00:08:23.468 Atomic Write Unit (PFail): 1 00:08:23.468 Atomic Compare & Write Unit: 1 00:08:23.468 Fused Compare & Write: Not Supported 00:08:23.468 Scatter-Gather List 00:08:23.468 SGL Command Set: Supported 00:08:23.468 SGL Keyed: Not Supported 00:08:23.468 SGL Bit Bucket Descriptor: Not Supported 00:08:23.468 SGL Metadata Pointer: Not Supported 00:08:23.468 Oversized SGL: Not Supported 00:08:23.468 SGL Metadata Address: Not Supported 00:08:23.468 SGL Offset: Not Supported 00:08:23.469 Transport SGL Data Block: Not Supported 00:08:23.469 Replay Protected Memory Block: Not Supported 00:08:23.469 00:08:23.469 Firmware Slot Information 00:08:23.469 ========================= 00:08:23.469 Active slot: 1 00:08:23.469 Slot 1 Firmware Revision: 1.0 00:08:23.469 00:08:23.469 00:08:23.469 Commands Supported and Effects 00:08:23.469 ============================== 00:08:23.469 Admin Commands 00:08:23.469 -------------- 00:08:23.469 Delete I/O Submission Queue (00h): Supported 00:08:23.469 Create I/O Submission Queue (01h): Supported 00:08:23.469 Get Log Page (02h): Supported 00:08:23.469 Delete I/O Completion Queue (04h): Supported 
00:08:23.469 Create I/O Completion Queue (05h): Supported 00:08:23.469 Identify (06h): Supported 00:08:23.469 Abort (08h): Supported 00:08:23.469 Set Features (09h): Supported 00:08:23.469 Get Features (0Ah): Supported 00:08:23.469 Asynchronous Event Request (0Ch): Supported 00:08:23.469 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:23.469 Directive Send (19h): Supported 00:08:23.469 Directive Receive (1Ah): Supported 00:08:23.469 Virtualization Management (1Ch): Supported 00:08:23.469 Doorbell Buffer Config (7Ch): Supported 00:08:23.469 Format NVM (80h): Supported LBA-Change 00:08:23.469 I/O Commands 00:08:23.469 ------------ 00:08:23.469 Flush (00h): Supported LBA-Change 00:08:23.469 Write (01h): Supported LBA-Change 00:08:23.469 Read (02h): Supported 00:08:23.469 Compare (05h): Supported 00:08:23.469 Write Zeroes (08h): Supported LBA-Change 00:08:23.469 Dataset Management (09h): Supported LBA-Change 00:08:23.469 Unknown (0Ch): Supported 00:08:23.469 Unknown (12h): Supported 00:08:23.469 Copy (19h): Supported LBA-Change 00:08:23.469 Unknown (1Dh): Supported LBA-Change 00:08:23.469 00:08:23.469 Error Log 00:08:23.469 ========= 00:08:23.469 00:08:23.469 Arbitration 00:08:23.469 =========== 00:08:23.469 Arbitration Burst: no limit 00:08:23.469 00:08:23.469 Power Management 00:08:23.469 ================ 00:08:23.469 Number of Power States: 1 00:08:23.469 Current Power State: Power State #0 00:08:23.469 Power State #0: 00:08:23.469 Max Power: 25.00 W 00:08:23.469 Non-Operational State: Operational 00:08:23.469 Entry Latency: 16 microseconds 00:08:23.469 Exit Latency: 4 microseconds 00:08:23.469 Relative Read Throughput: 0 00:08:23.469 Relative Read Latency: 0 00:08:23.469 Relative Write Throughput: 0 00:08:23.469 Relative Write Latency: 0 00:08:23.469 Idle Power: Not Reported 00:08:23.469 Active Power: Not Reported 00:08:23.469 Non-Operational Permissive Mode: Not Supported 00:08:23.469 00:08:23.469 Health Information 00:08:23.469 ================== 00:08:23.469 Critical Warnings: 00:08:23.469 Available Spare Space: OK 00:08:23.469 Temperature: OK 00:08:23.469 Device Reliability: OK 00:08:23.469 Read Only: No 00:08:23.469 Volatile Memory Backup: OK 00:08:23.469 Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.469 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:23.469 Available Spare: 0% 00:08:23.469 Available Spare Threshold: 0% 00:08:23.469 Life Percentage Used: 0% 00:08:23.469 Data Units Read: 669 00:08:23.469 Data Units Written: 597 00:08:23.469 Host Read Commands: 33565 00:08:23.469 Host Write Commands: 33351 00:08:23.469 Controller Busy Time: 0 minutes 00:08:23.469 Power Cycles: 0 00:08:23.469 Power On Hours: 0 hours 00:08:23.469 Unsafe Shutdowns: 0 00:08:23.469 Unrecoverable Media Errors: 0 00:08:23.469 Lifetime Error Log Entries: 0 00:08:23.469 Warning Temperature Time: 0 minutes 00:08:23.469 Critical Temperature Time: 0 minutes 00:08:23.469 00:08:23.469 Number of Queues 00:08:23.469 ================ 00:08:23.469 Number of I/O Submission Queues: 64 00:08:23.469 Number of I/O Completion Queues: 64 00:08:23.469 00:08:23.469 ZNS Specific Controller Data 00:08:23.469 ============================ 00:08:23.469 Zone Append Size Limit: 0 00:08:23.469 00:08:23.469 00:08:23.469 Active Namespaces 00:08:23.469 ================= 00:08:23.469 Namespace ID:1 00:08:23.469 Error Recovery Timeout: Unlimited 00:08:23.469 Command Set Identifier: NVM (00h) 00:08:23.469 Deallocate: Supported 00:08:23.469 Deallocated/Unwritten Error: Supported 00:08:23.469 Deallocated Read Value: 
All 0x00 00:08:23.469 Deallocate in Write Zeroes: Not Supported 00:08:23.469 Deallocated Guard Field: 0xFFFF 00:08:23.469 Flush: Supported 00:08:23.469 Reservation: Not Supported 00:08:23.469 Metadata Transferred as: Separate Metadata Buffer 00:08:23.469 Namespace Sharing Capabilities: Private 00:08:23.469 Size (in LBAs): 1548666 (5GiB) 00:08:23.469 Capacity (in LBAs): 1548666 (5GiB) 00:08:23.469 Utilization (in LBAs): 1548666 (5GiB) 00:08:23.469 Thin Provisioning: Not Supported 00:08:23.469 Per-NS Atomic Units: No 00:08:23.469 Maximum Single Source Range Length: 128 00:08:23.469 Maximum Copy Length: 128 00:08:23.469 Maximum Source Range Count: 128 00:08:23.469 NGUID/EUI64 Never Reused: No 00:08:23.469 Namespace Write Protected: No 00:08:23.469 Number of LBA Formats: 8 00:08:23.469 Current LBA Format: LBA Format #07 00:08:23.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:23.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:23.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:23.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:23.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:23.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:23.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:23.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:23.469 00:08:23.469 NVM Specific Namespace Data 00:08:23.469 =========================== 00:08:23.469 Logical Block Storage Tag Mask: 0 00:08:23.469 Protection Information Capabilities: 00:08:23.469 16b Guard Protection Information Storage Tag Support: No 00:08:23.469 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:23.469 Storage Tag Check Read Support: No 00:08:23.469 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.469 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.469 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.469 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.469 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.469 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.469 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.469 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.469 ===================================================== 00:08:23.469 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:23.469 ===================================================== 00:08:23.469 Controller Capabilities/Features 00:08:23.469 ================================ 00:08:23.469 Vendor ID: 1b36 00:08:23.469 Subsystem Vendor ID: 1af4 00:08:23.469 Serial Number: 12341 00:08:23.469 Model Number: QEMU NVMe Ctrl 00:08:23.469 Firmware Version: 8.0.0 00:08:23.469 Recommended Arb Burst: 6 00:08:23.469 IEEE OUI Identifier: 00 54 52 00:08:23.469 Multi-path I/O 00:08:23.469 May have multiple subsystem ports: No 00:08:23.469 May have multiple controllers: No 00:08:23.469 Associated with SR-IOV VF: No 00:08:23.469 Max Data Transfer Size: 524288 00:08:23.469 Max Number of Namespaces: 256 00:08:23.469 Max Number of I/O Queues: 64 00:08:23.469 NVMe Specification Version (VS): 1.4 00:08:23.469 NVMe Specification Version (Identify): 1.4 00:08:23.469 Maximum Queue Entries: 2048 
00:08:23.469 Contiguous Queues Required: Yes 00:08:23.469 Arbitration Mechanisms Supported 00:08:23.469 Weighted Round Robin: Not Supported 00:08:23.469 Vendor Specific: Not Supported 00:08:23.469 Reset Timeout: 7500 ms 00:08:23.469 Doorbell Stride: 4 bytes 00:08:23.469 NVM Subsystem Reset: Not Supported 00:08:23.469 Command Sets Supported 00:08:23.469 NVM Command Set: Supported 00:08:23.469 Boot Partition: Not Supported 00:08:23.469 Memory Page Size Minimum: 4096 bytes 00:08:23.469 Memory Page Size Maximum: 65536 bytes 00:08:23.469 Persistent Memory Region: Not Supported 00:08:23.469 Optional Asynchronous Events Supported 00:08:23.469 Namespace Attribute Notices: Supported 00:08:23.469 Firmware Activation Notices: Not Supported 00:08:23.469 ANA Change Notices: Not Supported 00:08:23.469 PLE Aggregate Log Change Notices: Not Supported 00:08:23.469 LBA Status Info Alert Notices: Not Supported 00:08:23.469 EGE Aggregate Log Change Notices: Not Supported 00:08:23.469 Normal NVM Subsystem Shutdown event: Not Supported 00:08:23.469 Zone Descriptor Change Notices: Not Supported 00:08:23.469 Discovery Log Change Notices: Not Supported 00:08:23.470 Controller Attributes 00:08:23.470 128-bit Host Identifier: Not Supported 00:08:23.470 Non-Operational Permissive Mode: Not Supported 00:08:23.470 NVM Sets: Not Supported 00:08:23.470 Read Recovery Levels: Not Supported 00:08:23.470 Endurance Groups: Not Supported 00:08:23.470 Predictable Latency Mode: Not Supported 00:08:23.470 Traffic Based Keep ALive: Not Supported 00:08:23.470 Namespace Granularity: Not Supported 00:08:23.470 SQ Associations: Not Supported 00:08:23.470 UUID List: Not Supported 00:08:23.470 Multi-Domain Subsystem: Not Supported 00:08:23.470 Fixed Capacity Management: Not Supported 00:08:23.470 Variable Capacity Management: Not Supported 00:08:23.470 Delete Endurance Group: Not Supported 00:08:23.470 Delete NVM Set: Not Supported 00:08:23.470 Extended LBA Formats Supported: Supported 00:08:23.470 Flexible Data Placement Supported: Not Supported 00:08:23.470 00:08:23.470 Controller Memory Buffer Support 00:08:23.470 ================================ 00:08:23.470 Supported: No 00:08:23.470 00:08:23.470 Persistent Memory Region Support 00:08:23.470 ================================ 00:08:23.470 Supported: No 00:08:23.470 00:08:23.470 Admin Command Set Attributes 00:08:23.470 ============================ 00:08:23.470 Security Send/Receive: Not Supported 00:08:23.470 Format NVM: Supported 00:08:23.470 Firmware Activate/Download: Not Supported 00:08:23.470 Namespace Management: Supported 00:08:23.470 Device Self-Test: Not Supported 00:08:23.470 Directives: Supported 00:08:23.470 NVMe-MI: Not Supported 00:08:23.470 Virtualization Management: Not Supported 00:08:23.470 Doorbell Buffer Config: Supported 00:08:23.470 Get LBA Status Capability: Not Supported 00:08:23.470 Command & Feature Lockdown Capability: Not Supported 00:08:23.470 Abort Command Limit: 4 00:08:23.470 Async Event Request Limit: 4 00:08:23.470 Number of Firmware Slots: N/A 00:08:23.470 Firmware Slot 1 Read-Only: N/A 00:08:23.470 Firmware Activation Without Reset: N/A 00:08:23.470 Multiple Update Detection Support: N/A 00:08:23.470 Firmware Update Granularity: No Information Provided 00:08:23.470 Per-Namespace SMART Log: Yes 00:08:23.470 Asymmetric Namespace Access Log Page: Not Supported 00:08:23.470 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:23.470 Command Effects Log Page: Supported 00:08:23.470 Get Log Page Extended Data: Supported 00:08:23.470 Telemetry Log Pages: Not 
Supported 00:08:23.470 Persistent Event Log Pages: Not Supported 00:08:23.470 Supported Log Pages Log Page: May Support 00:08:23.470 Commands Supported & Effects Log Page: Not Supported 00:08:23.470 Feature Identifiers & Effects Log Page:May Support 00:08:23.470 NVMe-MI Commands & Effects Log Page: May Support 00:08:23.470 Data Area 4 for Telemetry Log: Not Supported 00:08:23.470 Error Log Page Entries Supported: 1 00:08:23.470 Keep Alive: Not Supported 00:08:23.470 00:08:23.470 NVM Command Set Attributes 00:08:23.470 ========================== 00:08:23.470 Submission Queue Entry Size 00:08:23.470 Max: 64 00:08:23.470 Min: 64 00:08:23.470 Completion Queue Entry Size 00:08:23.470 Max: 16 00:08:23.470 Min: 16 00:08:23.470 Number of Namespaces: 256 00:08:23.470 Compare Command: Supported 00:08:23.470 Write Uncorrectable Command: Not Supported 00:08:23.470 Dataset Management Command: Supported 00:08:23.470 Write Zeroes Command: Supported 00:08:23.470 Set Features Save Field: Supported 00:08:23.470 Reservations: Not Supported 00:08:23.470 Timestamp: Supported 00:08:23.470 Copy: Supported 00:08:23.470 Volatile Write Cache: Present 00:08:23.470 Atomic Write Unit (Normal): 1 00:08:23.470 Atomic Write Unit (PFail): 1 00:08:23.470 Atomic Compare & Write Unit: 1 00:08:23.470 Fused Compare & Write: Not Supported 00:08:23.470 Scatter-Gather List 00:08:23.470 SGL Command Set: Supported 00:08:23.470 SGL Keyed: Not Supported 00:08:23.470 SGL Bit Bucket Descriptor: Not Supported 00:08:23.470 SGL Metadata Pointer: Not Supported 00:08:23.470 Oversized SGL: Not Supported 00:08:23.470 SGL Metadata Address: Not Supported 00:08:23.470 SGL Offset: Not Supported 00:08:23.470 Transport SGL Data Block: Not Supported 00:08:23.470 Replay Protected Memory Block: Not Supported 00:08:23.470 00:08:23.470 Firmware Slot Information 00:08:23.470 ========================= 00:08:23.470 Active slot: 1 00:08:23.470 Slot 1 Firmware Revision: 1.0 00:08:23.470 00:08:23.470 00:08:23.470 Commands Supported and Effects 00:08:23.470 ============================== 00:08:23.470 Admin Commands 00:08:23.470 -------------- 00:08:23.470 Delete I/O Submission Queue (00h): Supported 00:08:23.470 Create I/O Submission Queue (01h): Supported 00:08:23.470 Get Log Page (02h): Supported 00:08:23.470 Delete I/O Completion Queue (04h): Supported 00:08:23.470 Create I/O Completion Queue (05h): Supported 00:08:23.470 Identify (06h): Supported 00:08:23.470 Abort (08h): Supported 00:08:23.470 Set Features (09h): Supported 00:08:23.470 Get Features (0Ah): Supported 00:08:23.470 Asynchronous Event Request (0Ch): Supported 00:08:23.470 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:23.470 Directive Send (19h): Supported 00:08:23.470 Directive Receive (1Ah): Supported 00:08:23.470 Virtualization Management (1Ch): Supported 00:08:23.470 Doorbell Buffer Config (7Ch): Supported 00:08:23.470 Format NVM (80h): Supported LBA-Change 00:08:23.470 I/O Commands 00:08:23.470 ------------ 00:08:23.470 Flush (00h): Supported LBA-Change 00:08:23.470 Write (01h): Supported LBA-Change 00:08:23.470 Read (02h): Supported 00:08:23.470 Compare (05h): Supported 00:08:23.470 Write Zeroes (08h): Supported LBA-Change 00:08:23.470 Dataset Management (09h): Supported LBA-Change 00:08:23.470 Unknown (0Ch): Supported 00:08:23.470 Unknown (12h): Supported 00:08:23.470 Copy (19h): Supported LBA-Change 00:08:23.470 Unknown (1Dh): Supported LBA-Change 00:08:23.470 00:08:23.470 Error Log 00:08:23.470 ========= 00:08:23.470 00:08:23.470 Arbitration 00:08:23.470 =========== 
00:08:23.470 Arbitration Burst: no limit 00:08:23.470 00:08:23.470 Power Management 00:08:23.470 ================ 00:08:23.470 Number of Power States: 1 00:08:23.470 Current Power State: Power State #0 00:08:23.470 Power State #0: 00:08:23.470 Max Power: 25.00 W 00:08:23.470 Non-Operational State: Operational 00:08:23.470 Entry Latency: 16 microseconds 00:08:23.470 Exit Latency: 4 microseconds 00:08:23.470 Relative Read Throughput: 0 00:08:23.470 Relative Read Latency: 0 00:08:23.470 Relative Write Throughput: 0 00:08:23.470 Relative Write Latency: 0 00:08:23.470 Idle Power: Not Reported 00:08:23.470 Active Power: Not Reported 00:08:23.470 Non-Operational Permissive Mode: Not Supported 00:08:23.470 00:08:23.470 Health Information 00:08:23.470 ================== 00:08:23.470 Critical Warnings: 00:08:23.470 Available Spare Space: OK 00:08:23.470 Temperature: OK 00:08:23.470 Device Reliability: OK 00:08:23.470 Read Only: No 00:08:23.470 Volatile Memory Backup: OK 00:08:23.470 Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.470 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:23.470 Available Spare: 0% 00:08:23.470 Available Spare Threshold: 0% 00:08:23.470 Life Percentage Used: 0% 00:08:23.470 Data Units Read: 954 00:08:23.470 Data Units Written: 821 00:08:23.470 Host Read Commands: 49208 00:08:23.470 Host Write Commands: 48004 00:08:23.470 Controller Busy Time: 0 minutes 00:08:23.470 Power Cycles: 0 00:08:23.470 Power On Hours: 0 hours 00:08:23.470 Unsafe Shutdowns: 0 00:08:23.470 Unrecoverable Media Errors: 0 00:08:23.470 Lifetime Error Log Entries: 0 00:08:23.470 Warning Temperature Time: 0 minutes 00:08:23.470 Critical Temperature Time: 0 minutes 00:08:23.470 00:08:23.470 Number of Queues 00:08:23.470 ================ 00:08:23.470 Number of I/O Submission Queues: 64 00:08:23.470 Number of I/O Completion Queues: 64 00:08:23.470 00:08:23.470 ZNS Specific Controller Data 00:08:23.470 ============================ 00:08:23.470 Zone Append Size Limit: 0 00:08:23.470 00:08:23.470 00:08:23.470 Active Namespaces 00:08:23.470 ================= 00:08:23.470 Namespace ID:1 00:08:23.470 Error Recovery Timeout: Unlimited 00:08:23.470 Command Set Identifier: NVM (00h) 00:08:23.470 Deallocate: Supported 00:08:23.470 Deallocated/Unwritten Error: Supported 00:08:23.470 Deallocated Read Value: All 0x00 00:08:23.470 Deallocate in Write Zeroes: Not Supported 00:08:23.470 Deallocated Guard Field: 0xFFFF 00:08:23.470 Flush: Supported 00:08:23.470 Reservation: Not Supported 00:08:23.470 Namespace Sharing Capabilities: Private 00:08:23.470 Size (in LBAs): 1310720 (5GiB) 00:08:23.471 Capacity (in LBAs): 1310720 (5GiB) 00:08:23.471 Utilization (in LBAs): 1310720 (5GiB) 00:08:23.471 Thin Provisioning: Not Supported 00:08:23.471 Per-NS Atomic Units: No 00:08:23.471 Maximum Single Source Range Length: 128 00:08:23.471 Maximum Copy Length: 128 00:08:23.471 Maximum Source Range Count: 128 00:08:23.471 NGUID/EUI64 Never Reused: No 00:08:23.471 Namespace Write Protected: No 00:08:23.471 Number of LBA Formats: 8 00:08:23.471 Current LBA Format: LBA Format #04 00:08:23.471 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:23.471 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:23.471 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:23.471 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:23.471 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:23.471 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:23.471 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:23.471 LBA Format #07: 
Data Size: 4096 Metadata Size: 64 00:08:23.471 00:08:23.471 NVM Specific Namespace Data 00:08:23.471 =========================== 00:08:23.471 Logical Block Storage Tag Mask: 0 00:08:23.471 Protection Information Capabilities: 00:08:23.471 16b Guard Protection Information Storage Tag Support: No 00:08:23.471 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:23.471 Storage Tag Check Read Support: No 00:08:23.471 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.471 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.471 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.471 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.471 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.471 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.471 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.471 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.471 ===================================================== 00:08:23.471 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.471 ===================================================== 00:08:23.471 Controller Capabilities/Features 00:08:23.471 ================================ 00:08:23.471 Vendor ID: 1b36 00:08:23.471 Subsystem Vendor ID: 1af4 00:08:23.471 Serial Number: 12343 00:08:23.471 Model Number: QEMU NVMe Ctrl 00:08:23.471 Firmware Version: 8.0.0 00:08:23.471 Recommended Arb Burst: 6 00:08:23.471 IEEE OUI Identifier: 00 54 52 00:08:23.471 Mul[2024-11-18 13:21:19.568115] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74778 terminated unexpected 00:08:23.471 [2024-11-18 13:21:19.569957] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74778 terminated unexpected 00:08:23.471 [2024-11-18 13:21:19.571591] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74778 terminated unexpected 00:08:23.471 ti-path I/O 00:08:23.471 May have multiple subsystem ports: No 00:08:23.471 May have multiple controllers: Yes 00:08:23.471 Associated with SR-IOV VF: No 00:08:23.471 Max Data Transfer Size: 524288 00:08:23.471 Max Number of Namespaces: 256 00:08:23.471 Max Number of I/O Queues: 64 00:08:23.471 NVMe Specification Version (VS): 1.4 00:08:23.471 NVMe Specification Version (Identify): 1.4 00:08:23.471 Maximum Queue Entries: 2048 00:08:23.471 Contiguous Queues Required: Yes 00:08:23.471 Arbitration Mechanisms Supported 00:08:23.471 Weighted Round Robin: Not Supported 00:08:23.471 Vendor Specific: Not Supported 00:08:23.471 Reset Timeout: 7500 ms 00:08:23.471 Doorbell Stride: 4 bytes 00:08:23.471 NVM Subsystem Reset: Not Supported 00:08:23.471 Command Sets Supported 00:08:23.471 NVM Command Set: Supported 00:08:23.471 Boot Partition: Not Supported 00:08:23.471 Memory Page Size Minimum: 4096 bytes 00:08:23.471 Memory Page Size Maximum: 65536 bytes 00:08:23.471 Persistent Memory Region: Not Supported 00:08:23.471 Optional Asynchronous Events Supported 00:08:23.471 Namespace Attribute Notices: Supported 00:08:23.471 Firmware Activation Notices: Not Supported 00:08:23.471 ANA Change Notices: Not Supported 00:08:23.471 PLE Aggregate Log Change 
Notices: Not Supported 00:08:23.471 LBA Status Info Alert Notices: Not Supported 00:08:23.471 EGE Aggregate Log Change Notices: Not Supported 00:08:23.471 Normal NVM Subsystem Shutdown event: Not Supported 00:08:23.471 Zone Descriptor Change Notices: Not Supported 00:08:23.471 Discovery Log Change Notices: Not Supported 00:08:23.471 Controller Attributes 00:08:23.471 128-bit Host Identifier: Not Supported 00:08:23.471 Non-Operational Permissive Mode: Not Supported 00:08:23.471 NVM Sets: Not Supported 00:08:23.471 Read Recovery Levels: Not Supported 00:08:23.471 Endurance Groups: Supported 00:08:23.471 Predictable Latency Mode: Not Supported 00:08:23.471 Traffic Based Keep ALive: Not Supported 00:08:23.471 Namespace Granularity: Not Supported 00:08:23.471 SQ Associations: Not Supported 00:08:23.471 UUID List: Not Supported 00:08:23.471 Multi-Domain Subsystem: Not Supported 00:08:23.471 Fixed Capacity Management: Not Supported 00:08:23.471 Variable Capacity Management: Not Supported 00:08:23.471 Delete Endurance Group: Not Supported 00:08:23.471 Delete NVM Set: Not Supported 00:08:23.471 Extended LBA Formats Supported: Supported 00:08:23.471 Flexible Data Placement Supported: Supported 00:08:23.471 00:08:23.471 Controller Memory Buffer Support 00:08:23.471 ================================ 00:08:23.471 Supported: No 00:08:23.471 00:08:23.471 Persistent Memory Region Support 00:08:23.471 ================================ 00:08:23.471 Supported: No 00:08:23.471 00:08:23.471 Admin Command Set Attributes 00:08:23.471 ============================ 00:08:23.471 Security Send/Receive: Not Supported 00:08:23.471 Format NVM: Supported 00:08:23.471 Firmware Activate/Download: Not Supported 00:08:23.471 Namespace Management: Supported 00:08:23.471 Device Self-Test: Not Supported 00:08:23.471 Directives: Supported 00:08:23.471 NVMe-MI: Not Supported 00:08:23.471 Virtualization Management: Not Supported 00:08:23.471 Doorbell Buffer Config: Supported 00:08:23.471 Get LBA Status Capability: Not Supported 00:08:23.471 Command & Feature Lockdown Capability: Not Supported 00:08:23.471 Abort Command Limit: 4 00:08:23.471 Async Event Request Limit: 4 00:08:23.471 Number of Firmware Slots: N/A 00:08:23.471 Firmware Slot 1 Read-Only: N/A 00:08:23.471 Firmware Activation Without Reset: N/A 00:08:23.471 Multiple Update Detection Support: N/A 00:08:23.471 Firmware Update Granularity: No Information Provided 00:08:23.471 Per-Namespace SMART Log: Yes 00:08:23.471 Asymmetric Namespace Access Log Page: Not Supported 00:08:23.471 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:23.471 Command Effects Log Page: Supported 00:08:23.471 Get Log Page Extended Data: Supported 00:08:23.471 Telemetry Log Pages: Not Supported 00:08:23.471 Persistent Event Log Pages: Not Supported 00:08:23.471 Supported Log Pages Log Page: May Support 00:08:23.471 Commands Supported & Effects Log Page: Not Supported 00:08:23.471 Feature Identifiers & Effects Log Page:May Support 00:08:23.471 NVMe-MI Commands & Effects Log Page: May Support 00:08:23.471 Data Area 4 for Telemetry Log: Not Supported 00:08:23.471 Error Log Page Entries Supported: 1 00:08:23.471 Keep Alive: Not Supported 00:08:23.471 00:08:23.471 NVM Command Set Attributes 00:08:23.471 ========================== 00:08:23.471 Submission Queue Entry Size 00:08:23.471 Max: 64 00:08:23.471 Min: 64 00:08:23.471 Completion Queue Entry Size 00:08:23.471 Max: 16 00:08:23.471 Min: 16 00:08:23.471 Number of Namespaces: 256 00:08:23.471 Compare Command: Supported 00:08:23.471 Write 
Uncorrectable Command: Not Supported 00:08:23.471 Dataset Management Command: Supported 00:08:23.471 Write Zeroes Command: Supported 00:08:23.471 Set Features Save Field: Supported 00:08:23.471 Reservations: Not Supported 00:08:23.471 Timestamp: Supported 00:08:23.471 Copy: Supported 00:08:23.471 Volatile Write Cache: Present 00:08:23.471 Atomic Write Unit (Normal): 1 00:08:23.471 Atomic Write Unit (PFail): 1 00:08:23.471 Atomic Compare & Write Unit: 1 00:08:23.471 Fused Compare & Write: Not Supported 00:08:23.471 Scatter-Gather List 00:08:23.471 SGL Command Set: Supported 00:08:23.471 SGL Keyed: Not Supported 00:08:23.471 SGL Bit Bucket Descriptor: Not Supported 00:08:23.471 SGL Metadata Pointer: Not Supported 00:08:23.471 Oversized SGL: Not Supported 00:08:23.471 SGL Metadata Address: Not Supported 00:08:23.471 SGL Offset: Not Supported 00:08:23.471 Transport SGL Data Block: Not Supported 00:08:23.471 Replay Protected Memory Block: Not Supported 00:08:23.471 00:08:23.471 Firmware Slot Information 00:08:23.471 ========================= 00:08:23.471 Active slot: 1 00:08:23.471 Slot 1 Firmware Revision: 1.0 00:08:23.471 00:08:23.471 00:08:23.471 Commands Supported and Effects 00:08:23.471 ============================== 00:08:23.471 Admin Commands 00:08:23.471 -------------- 00:08:23.472 Delete I/O Submission Queue (00h): Supported 00:08:23.472 Create I/O Submission Queue (01h): Supported 00:08:23.472 Get Log Page (02h): Supported 00:08:23.472 Delete I/O Completion Queue (04h): Supported 00:08:23.472 Create I/O Completion Queue (05h): Supported 00:08:23.472 Identify (06h): Supported 00:08:23.472 Abort (08h): Supported 00:08:23.472 Set Features (09h): Supported 00:08:23.472 Get Features (0Ah): Supported 00:08:23.472 Asynchronous Event Request (0Ch): Supported 00:08:23.472 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:23.472 Directive Send (19h): Supported 00:08:23.472 Directive Receive (1Ah): Supported 00:08:23.472 Virtualization Management (1Ch): Supported 00:08:23.472 Doorbell Buffer Config (7Ch): Supported 00:08:23.472 Format NVM (80h): Supported LBA-Change 00:08:23.472 I/O Commands 00:08:23.472 ------------ 00:08:23.472 Flush (00h): Supported LBA-Change 00:08:23.472 Write (01h): Supported LBA-Change 00:08:23.472 Read (02h): Supported 00:08:23.472 Compare (05h): Supported 00:08:23.472 Write Zeroes (08h): Supported LBA-Change 00:08:23.472 Dataset Management (09h): Supported LBA-Change 00:08:23.472 Unknown (0Ch): Supported 00:08:23.472 Unknown (12h): Supported 00:08:23.472 Copy (19h): Supported LBA-Change 00:08:23.472 Unknown (1Dh): Supported LBA-Change 00:08:23.472 00:08:23.472 Error Log 00:08:23.472 ========= 00:08:23.472 00:08:23.472 Arbitration 00:08:23.472 =========== 00:08:23.472 Arbitration Burst: no limit 00:08:23.472 00:08:23.472 Power Management 00:08:23.472 ================ 00:08:23.472 Number of Power States: 1 00:08:23.472 Current Power State: Power State #0 00:08:23.472 Power State #0: 00:08:23.472 Max Power: 25.00 W 00:08:23.472 Non-Operational State: Operational 00:08:23.472 Entry Latency: 16 microseconds 00:08:23.472 Exit Latency: 4 microseconds 00:08:23.472 Relative Read Throughput: 0 00:08:23.472 Relative Read Latency: 0 00:08:23.472 Relative Write Throughput: 0 00:08:23.472 Relative Write Latency: 0 00:08:23.472 Idle Power: Not Reported 00:08:23.472 Active Power: Not Reported 00:08:23.472 Non-Operational Permissive Mode: Not Supported 00:08:23.472 00:08:23.472 Health Information 00:08:23.472 ================== 00:08:23.472 Critical Warnings: 00:08:23.472 
Available Spare Space: OK 00:08:23.472 Temperature: OK 00:08:23.472 Device Reliability: OK 00:08:23.472 Read Only: No 00:08:23.472 Volatile Memory Backup: OK 00:08:23.472 Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.472 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:23.472 Available Spare: 0% 00:08:23.472 Available Spare Threshold: 0% 00:08:23.472 Life Percentage Used: 0% 00:08:23.472 Data Units Read: 750 00:08:23.472 Data Units Written: 679 00:08:23.472 Host Read Commands: 34433 00:08:23.472 Host Write Commands: 33856 00:08:23.472 Controller Busy Time: 0 minutes 00:08:23.472 Power Cycles: 0 00:08:23.472 Power On Hours: 0 hours 00:08:23.472 Unsafe Shutdowns: 0 00:08:23.472 Unrecoverable Media Errors: 0 00:08:23.472 Lifetime Error Log Entries: 0 00:08:23.472 Warning Temperature Time: 0 minutes 00:08:23.472 Critical Temperature Time: 0 minutes 00:08:23.472 00:08:23.472 Number of Queues 00:08:23.472 ================ 00:08:23.472 Number of I/O Submission Queues: 64 00:08:23.472 Number of I/O Completion Queues: 64 00:08:23.472 00:08:23.472 ZNS Specific Controller Data 00:08:23.472 ============================ 00:08:23.472 Zone Append Size Limit: 0 00:08:23.472 00:08:23.472 00:08:23.472 Active Namespaces 00:08:23.472 ================= 00:08:23.472 Namespace ID:1 00:08:23.472 Error Recovery Timeout: Unlimited 00:08:23.472 Command Set Identifier: NVM (00h) 00:08:23.472 Deallocate: Supported 00:08:23.472 Deallocated/Unwritten Error: Supported 00:08:23.472 Deallocated Read Value: All 0x00 00:08:23.472 Deallocate in Write Zeroes: Not Supported 00:08:23.472 Deallocated Guard Field: 0xFFFF 00:08:23.472 Flush: Supported 00:08:23.472 Reservation: Not Supported 00:08:23.472 Namespace Sharing Capabilities: Multiple Controllers 00:08:23.472 Size (in LBAs): 262144 (1GiB) 00:08:23.472 Capacity (in LBAs): 262144 (1GiB) 00:08:23.472 Utilization (in LBAs): 262144 (1GiB) 00:08:23.472 Thin Provisioning: Not Supported 00:08:23.472 Per-NS Atomic Units: No 00:08:23.472 Maximum Single Source Range Length: 128 00:08:23.472 Maximum Copy Length: 128 00:08:23.472 Maximum Source Range Count: 128 00:08:23.472 NGUID/EUI64 Never Reused: No 00:08:23.472 Namespace Write Protected: No 00:08:23.472 Endurance group ID: 1 00:08:23.472 Number of LBA Formats: 8 00:08:23.472 Current LBA Format: LBA Format #04 00:08:23.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:23.472 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:23.472 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:23.472 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:23.472 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:23.472 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:23.472 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:23.472 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:23.472 00:08:23.472 Get Feature FDP: 00:08:23.472 ================ 00:08:23.472 Enabled: Yes 00:08:23.472 FDP configuration index: 0 00:08:23.472 00:08:23.472 FDP configurations log page 00:08:23.472 =========================== 00:08:23.472 Number of FDP configurations: 1 00:08:23.472 Version: 0 00:08:23.472 Size: 112 00:08:23.472 FDP Configuration Descriptor: 0 00:08:23.472 Descriptor Size: 96 00:08:23.472 Reclaim Group Identifier format: 2 00:08:23.472 FDP Volatile Write Cache: Not Present 00:08:23.472 FDP Configuration: Valid 00:08:23.472 Vendor Specific Size: 0 00:08:23.472 Number of Reclaim Groups: 2 00:08:23.472 Number of Recalim Unit Handles: 8 00:08:23.472 Max Placement Identifiers: 128 00:08:23.472 Number of 
Namespaces Suppprted: 256 00:08:23.472 Reclaim unit Nominal Size: 6000000 bytes 00:08:23.472 Estimated Reclaim Unit Time Limit: Not Reported 00:08:23.472 RUH Desc #000: RUH Type: Initially Isolated 00:08:23.472 RUH Desc #001: RUH Type: Initially Isolated 00:08:23.472 RUH Desc #002: RUH Type: Initially Isolated 00:08:23.472 RUH Desc #003: RUH Type: Initially Isolated 00:08:23.472 RUH Desc #004: RUH Type: Initially Isolated 00:08:23.472 RUH Desc #005: RUH Type: Initially Isolated 00:08:23.472 RUH Desc #006: RUH Type: Initially Isolated 00:08:23.472 RUH Desc #007: RUH Type: Initially Isolated 00:08:23.472 00:08:23.472 FDP reclaim unit handle usage log page 00:08:23.472 ====================================== 00:08:23.472 Number of Reclaim Unit Handles: 8 00:08:23.472 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:23.472 RUH Usage Desc #001: RUH Attributes: Unused 00:08:23.472 RUH Usage Desc #002: RUH Attributes: Unused 00:08:23.472 RUH Usage Desc #003: RUH Attributes: Unused 00:08:23.472 RUH Usage Desc #004: RUH Attributes: Unused 00:08:23.472 RUH Usage Desc #005: RUH Attributes: Unused 00:08:23.472 RUH Usage Desc #006: RUH Attributes: Unused 00:08:23.472 RUH Usage Desc #007: RUH Attributes: Unused 00:08:23.472 00:08:23.472 FDP statistics log page 00:08:23.472 ======================= 00:08:23.472 Host bytes with metadata written: 419012608 00:08:23.472 Media[2024-11-18 13:21:19.574691] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74778 terminated unexpected 00:08:23.472 bytes with metadata written: 419057664 00:08:23.472 Media bytes erased: 0 00:08:23.472 00:08:23.472 FDP events log page 00:08:23.472 =================== 00:08:23.472 Number of FDP events: 0 00:08:23.472 00:08:23.472 NVM Specific Namespace Data 00:08:23.472 =========================== 00:08:23.472 Logical Block Storage Tag Mask: 0 00:08:23.472 Protection Information Capabilities: 00:08:23.472 16b Guard Protection Information Storage Tag Support: No 00:08:23.472 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:23.472 Storage Tag Check Read Support: No 00:08:23.472 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.472 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.472 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.472 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.472 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.472 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.472 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.473 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.473 ===================================================== 00:08:23.473 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:23.473 ===================================================== 00:08:23.473 Controller Capabilities/Features 00:08:23.473 ================================ 00:08:23.473 Vendor ID: 1b36 00:08:23.473 Subsystem Vendor ID: 1af4 00:08:23.473 Serial Number: 12342 00:08:23.473 Model Number: QEMU NVMe Ctrl 00:08:23.473 Firmware Version: 8.0.0 00:08:23.473 Recommended Arb Burst: 6 00:08:23.473 IEEE OUI Identifier: 00 54 52 00:08:23.473 Multi-path I/O 
00:08:23.473 May have multiple subsystem ports: No 00:08:23.473 May have multiple controllers: No 00:08:23.473 Associated with SR-IOV VF: No 00:08:23.473 Max Data Transfer Size: 524288 00:08:23.473 Max Number of Namespaces: 256 00:08:23.473 Max Number of I/O Queues: 64 00:08:23.473 NVMe Specification Version (VS): 1.4 00:08:23.473 NVMe Specification Version (Identify): 1.4 00:08:23.473 Maximum Queue Entries: 2048 00:08:23.473 Contiguous Queues Required: Yes 00:08:23.473 Arbitration Mechanisms Supported 00:08:23.473 Weighted Round Robin: Not Supported 00:08:23.473 Vendor Specific: Not Supported 00:08:23.473 Reset Timeout: 7500 ms 00:08:23.473 Doorbell Stride: 4 bytes 00:08:23.473 NVM Subsystem Reset: Not Supported 00:08:23.473 Command Sets Supported 00:08:23.473 NVM Command Set: Supported 00:08:23.473 Boot Partition: Not Supported 00:08:23.473 Memory Page Size Minimum: 4096 bytes 00:08:23.473 Memory Page Size Maximum: 65536 bytes 00:08:23.473 Persistent Memory Region: Not Supported 00:08:23.473 Optional Asynchronous Events Supported 00:08:23.473 Namespace Attribute Notices: Supported 00:08:23.473 Firmware Activation Notices: Not Supported 00:08:23.473 ANA Change Notices: Not Supported 00:08:23.473 PLE Aggregate Log Change Notices: Not Supported 00:08:23.473 LBA Status Info Alert Notices: Not Supported 00:08:23.473 EGE Aggregate Log Change Notices: Not Supported 00:08:23.473 Normal NVM Subsystem Shutdown event: Not Supported 00:08:23.473 Zone Descriptor Change Notices: Not Supported 00:08:23.473 Discovery Log Change Notices: Not Supported 00:08:23.473 Controller Attributes 00:08:23.473 128-bit Host Identifier: Not Supported 00:08:23.473 Non-Operational Permissive Mode: Not Supported 00:08:23.473 NVM Sets: Not Supported 00:08:23.473 Read Recovery Levels: Not Supported 00:08:23.473 Endurance Groups: Not Supported 00:08:23.473 Predictable Latency Mode: Not Supported 00:08:23.473 Traffic Based Keep ALive: Not Supported 00:08:23.473 Namespace Granularity: Not Supported 00:08:23.473 SQ Associations: Not Supported 00:08:23.473 UUID List: Not Supported 00:08:23.473 Multi-Domain Subsystem: Not Supported 00:08:23.473 Fixed Capacity Management: Not Supported 00:08:23.473 Variable Capacity Management: Not Supported 00:08:23.473 Delete Endurance Group: Not Supported 00:08:23.473 Delete NVM Set: Not Supported 00:08:23.473 Extended LBA Formats Supported: Supported 00:08:23.473 Flexible Data Placement Supported: Not Supported 00:08:23.473 00:08:23.473 Controller Memory Buffer Support 00:08:23.473 ================================ 00:08:23.473 Supported: No 00:08:23.473 00:08:23.473 Persistent Memory Region Support 00:08:23.473 ================================ 00:08:23.473 Supported: No 00:08:23.473 00:08:23.473 Admin Command Set Attributes 00:08:23.473 ============================ 00:08:23.473 Security Send/Receive: Not Supported 00:08:23.473 Format NVM: Supported 00:08:23.473 Firmware Activate/Download: Not Supported 00:08:23.473 Namespace Management: Supported 00:08:23.473 Device Self-Test: Not Supported 00:08:23.473 Directives: Supported 00:08:23.473 NVMe-MI: Not Supported 00:08:23.473 Virtualization Management: Not Supported 00:08:23.473 Doorbell Buffer Config: Supported 00:08:23.473 Get LBA Status Capability: Not Supported 00:08:23.473 Command & Feature Lockdown Capability: Not Supported 00:08:23.473 Abort Command Limit: 4 00:08:23.473 Async Event Request Limit: 4 00:08:23.473 Number of Firmware Slots: N/A 00:08:23.473 Firmware Slot 1 Read-Only: N/A 00:08:23.473 Firmware Activation Without Reset: N/A 
00:08:23.473 Multiple Update Detection Support: N/A 00:08:23.473 Firmware Update Granularity: No Information Provided 00:08:23.473 Per-Namespace SMART Log: Yes 00:08:23.473 Asymmetric Namespace Access Log Page: Not Supported 00:08:23.473 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:23.473 Command Effects Log Page: Supported 00:08:23.473 Get Log Page Extended Data: Supported 00:08:23.473 Telemetry Log Pages: Not Supported 00:08:23.473 Persistent Event Log Pages: Not Supported 00:08:23.473 Supported Log Pages Log Page: May Support 00:08:23.473 Commands Supported & Effects Log Page: Not Supported 00:08:23.473 Feature Identifiers & Effects Log Page:May Support 00:08:23.473 NVMe-MI Commands & Effects Log Page: May Support 00:08:23.473 Data Area 4 for Telemetry Log: Not Supported 00:08:23.473 Error Log Page Entries Supported: 1 00:08:23.473 Keep Alive: Not Supported 00:08:23.473 00:08:23.473 NVM Command Set Attributes 00:08:23.473 ========================== 00:08:23.473 Submission Queue Entry Size 00:08:23.473 Max: 64 00:08:23.473 Min: 64 00:08:23.473 Completion Queue Entry Size 00:08:23.473 Max: 16 00:08:23.473 Min: 16 00:08:23.473 Number of Namespaces: 256 00:08:23.473 Compare Command: Supported 00:08:23.473 Write Uncorrectable Command: Not Supported 00:08:23.473 Dataset Management Command: Supported 00:08:23.473 Write Zeroes Command: Supported 00:08:23.473 Set Features Save Field: Supported 00:08:23.473 Reservations: Not Supported 00:08:23.473 Timestamp: Supported 00:08:23.473 Copy: Supported 00:08:23.473 Volatile Write Cache: Present 00:08:23.473 Atomic Write Unit (Normal): 1 00:08:23.473 Atomic Write Unit (PFail): 1 00:08:23.473 Atomic Compare & Write Unit: 1 00:08:23.473 Fused Compare & Write: Not Supported 00:08:23.473 Scatter-Gather List 00:08:23.473 SGL Command Set: Supported 00:08:23.473 SGL Keyed: Not Supported 00:08:23.473 SGL Bit Bucket Descriptor: Not Supported 00:08:23.473 SGL Metadata Pointer: Not Supported 00:08:23.473 Oversized SGL: Not Supported 00:08:23.473 SGL Metadata Address: Not Supported 00:08:23.473 SGL Offset: Not Supported 00:08:23.473 Transport SGL Data Block: Not Supported 00:08:23.473 Replay Protected Memory Block: Not Supported 00:08:23.473 00:08:23.473 Firmware Slot Information 00:08:23.473 ========================= 00:08:23.473 Active slot: 1 00:08:23.473 Slot 1 Firmware Revision: 1.0 00:08:23.473 00:08:23.473 00:08:23.473 Commands Supported and Effects 00:08:23.473 ============================== 00:08:23.473 Admin Commands 00:08:23.473 -------------- 00:08:23.473 Delete I/O Submission Queue (00h): Supported 00:08:23.473 Create I/O Submission Queue (01h): Supported 00:08:23.473 Get Log Page (02h): Supported 00:08:23.473 Delete I/O Completion Queue (04h): Supported 00:08:23.473 Create I/O Completion Queue (05h): Supported 00:08:23.473 Identify (06h): Supported 00:08:23.473 Abort (08h): Supported 00:08:23.473 Set Features (09h): Supported 00:08:23.473 Get Features (0Ah): Supported 00:08:23.473 Asynchronous Event Request (0Ch): Supported 00:08:23.473 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:23.473 Directive Send (19h): Supported 00:08:23.473 Directive Receive (1Ah): Supported 00:08:23.473 Virtualization Management (1Ch): Supported 00:08:23.473 Doorbell Buffer Config (7Ch): Supported 00:08:23.473 Format NVM (80h): Supported LBA-Change 00:08:23.473 I/O Commands 00:08:23.473 ------------ 00:08:23.473 Flush (00h): Supported LBA-Change 00:08:23.473 Write (01h): Supported LBA-Change 00:08:23.473 Read (02h): Supported 00:08:23.473 Compare (05h): 
Supported 00:08:23.473 Write Zeroes (08h): Supported LBA-Change 00:08:23.473 Dataset Management (09h): Supported LBA-Change 00:08:23.473 Unknown (0Ch): Supported 00:08:23.474 Unknown (12h): Supported 00:08:23.474 Copy (19h): Supported LBA-Change 00:08:23.474 Unknown (1Dh): Supported LBA-Change 00:08:23.474 00:08:23.474 Error Log 00:08:23.474 ========= 00:08:23.474 00:08:23.474 Arbitration 00:08:23.474 =========== 00:08:23.474 Arbitration Burst: no limit 00:08:23.474 00:08:23.474 Power Management 00:08:23.474 ================ 00:08:23.474 Number of Power States: 1 00:08:23.474 Current Power State: Power State #0 00:08:23.474 Power State #0: 00:08:23.474 Max Power: 25.00 W 00:08:23.474 Non-Operational State: Operational 00:08:23.474 Entry Latency: 16 microseconds 00:08:23.474 Exit Latency: 4 microseconds 00:08:23.474 Relative Read Throughput: 0 00:08:23.474 Relative Read Latency: 0 00:08:23.474 Relative Write Throughput: 0 00:08:23.474 Relative Write Latency: 0 00:08:23.474 Idle Power: Not Reported 00:08:23.474 Active Power: Not Reported 00:08:23.474 Non-Operational Permissive Mode: Not Supported 00:08:23.474 00:08:23.474 Health Information 00:08:23.474 ================== 00:08:23.474 Critical Warnings: 00:08:23.474 Available Spare Space: OK 00:08:23.474 Temperature: OK 00:08:23.474 Device Reliability: OK 00:08:23.474 Read Only: No 00:08:23.474 Volatile Memory Backup: OK 00:08:23.474 Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.474 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:23.474 Available Spare: 0% 00:08:23.474 Available Spare Threshold: 0% 00:08:23.474 Life Percentage Used: 0% 00:08:23.474 Data Units Read: 2021 00:08:23.474 Data Units Written: 1808 00:08:23.474 Host Read Commands: 101470 00:08:23.474 Host Write Commands: 99739 00:08:23.474 Controller Busy Time: 0 minutes 00:08:23.474 Power Cycles: 0 00:08:23.474 Power On Hours: 0 hours 00:08:23.474 Unsafe Shutdowns: 0 00:08:23.474 Unrecoverable Media Errors: 0 00:08:23.474 Lifetime Error Log Entries: 0 00:08:23.474 Warning Temperature Time: 0 minutes 00:08:23.474 Critical Temperature Time: 0 minutes 00:08:23.474 00:08:23.474 Number of Queues 00:08:23.474 ================ 00:08:23.474 Number of I/O Submission Queues: 64 00:08:23.474 Number of I/O Completion Queues: 64 00:08:23.474 00:08:23.474 ZNS Specific Controller Data 00:08:23.474 ============================ 00:08:23.474 Zone Append Size Limit: 0 00:08:23.474 00:08:23.474 00:08:23.474 Active Namespaces 00:08:23.474 ================= 00:08:23.474 Namespace ID:1 00:08:23.474 Error Recovery Timeout: Unlimited 00:08:23.474 Command Set Identifier: NVM (00h) 00:08:23.474 Deallocate: Supported 00:08:23.474 Deallocated/Unwritten Error: Supported 00:08:23.474 Deallocated Read Value: All 0x00 00:08:23.474 Deallocate in Write Zeroes: Not Supported 00:08:23.474 Deallocated Guard Field: 0xFFFF 00:08:23.474 Flush: Supported 00:08:23.474 Reservation: Not Supported 00:08:23.474 Namespace Sharing Capabilities: Private 00:08:23.474 Size (in LBAs): 1048576 (4GiB) 00:08:23.474 Capacity (in LBAs): 1048576 (4GiB) 00:08:23.474 Utilization (in LBAs): 1048576 (4GiB) 00:08:23.474 Thin Provisioning: Not Supported 00:08:23.474 Per-NS Atomic Units: No 00:08:23.474 Maximum Single Source Range Length: 128 00:08:23.474 Maximum Copy Length: 128 00:08:23.474 Maximum Source Range Count: 128 00:08:23.474 NGUID/EUI64 Never Reused: No 00:08:23.474 Namespace Write Protected: No 00:08:23.474 Number of LBA Formats: 8 00:08:23.474 Current LBA Format: LBA Format #04 00:08:23.474 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:23.474 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:23.474 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:23.474 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:23.474 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:23.474 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:23.474 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:23.474 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:23.474 00:08:23.474 NVM Specific Namespace Data 00:08:23.474 =========================== 00:08:23.474 Logical Block Storage Tag Mask: 0 00:08:23.474 Protection Information Capabilities: 00:08:23.474 16b Guard Protection Information Storage Tag Support: No 00:08:23.474 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:23.474 Storage Tag Check Read Support: No 00:08:23.474 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Namespace ID:2 00:08:23.474 Error Recovery Timeout: Unlimited 00:08:23.474 Command Set Identifier: NVM (00h) 00:08:23.474 Deallocate: Supported 00:08:23.474 Deallocated/Unwritten Error: Supported 00:08:23.474 Deallocated Read Value: All 0x00 00:08:23.474 Deallocate in Write Zeroes: Not Supported 00:08:23.474 Deallocated Guard Field: 0xFFFF 00:08:23.474 Flush: Supported 00:08:23.474 Reservation: Not Supported 00:08:23.474 Namespace Sharing Capabilities: Private 00:08:23.474 Size (in LBAs): 1048576 (4GiB) 00:08:23.474 Capacity (in LBAs): 1048576 (4GiB) 00:08:23.474 Utilization (in LBAs): 1048576 (4GiB) 00:08:23.474 Thin Provisioning: Not Supported 00:08:23.474 Per-NS Atomic Units: No 00:08:23.474 Maximum Single Source Range Length: 128 00:08:23.474 Maximum Copy Length: 128 00:08:23.474 Maximum Source Range Count: 128 00:08:23.474 NGUID/EUI64 Never Reused: No 00:08:23.474 Namespace Write Protected: No 00:08:23.474 Number of LBA Formats: 8 00:08:23.474 Current LBA Format: LBA Format #04 00:08:23.474 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:23.474 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:23.474 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:23.474 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:23.474 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:23.474 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:23.474 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:23.474 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:23.474 00:08:23.474 NVM Specific Namespace Data 00:08:23.474 =========================== 00:08:23.474 Logical Block Storage Tag Mask: 0 00:08:23.474 Protection Information Capabilities: 00:08:23.474 16b Guard Protection Information Storage Tag Support: No 00:08:23.474 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:08:23.474 Storage Tag Check Read Support: No 00:08:23.474 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.474 Namespace ID:3 00:08:23.474 Error Recovery Timeout: Unlimited 00:08:23.474 Command Set Identifier: NVM (00h) 00:08:23.474 Deallocate: Supported 00:08:23.474 Deallocated/Unwritten Error: Supported 00:08:23.474 Deallocated Read Value: All 0x00 00:08:23.474 Deallocate in Write Zeroes: Not Supported 00:08:23.474 Deallocated Guard Field: 0xFFFF 00:08:23.474 Flush: Supported 00:08:23.474 Reservation: Not Supported 00:08:23.474 Namespace Sharing Capabilities: Private 00:08:23.474 Size (in LBAs): 1048576 (4GiB) 00:08:23.738 Capacity (in LBAs): 1048576 (4GiB) 00:08:23.738 Utilization (in LBAs): 1048576 (4GiB) 00:08:23.738 Thin Provisioning: Not Supported 00:08:23.738 Per-NS Atomic Units: No 00:08:23.738 Maximum Single Source Range Length: 128 00:08:23.738 Maximum Copy Length: 128 00:08:23.738 Maximum Source Range Count: 128 00:08:23.738 NGUID/EUI64 Never Reused: No 00:08:23.738 Namespace Write Protected: No 00:08:23.738 Number of LBA Formats: 8 00:08:23.738 Current LBA Format: LBA Format #04 00:08:23.738 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:23.738 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:23.738 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:23.738 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:23.738 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:23.738 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:23.738 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:23.738 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:23.738 00:08:23.738 NVM Specific Namespace Data 00:08:23.738 =========================== 00:08:23.738 Logical Block Storage Tag Mask: 0 00:08:23.738 Protection Information Capabilities: 00:08:23.738 16b Guard Protection Information Storage Tag Support: No 00:08:23.738 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:23.738 Storage Tag Check Read Support: No 00:08:23.738 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.738 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.738 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.738 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.738 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.738 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.738 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.738 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.738 13:21:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:23.738 13:21:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:23.738 ===================================================== 00:08:23.738 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.738 ===================================================== 00:08:23.738 Controller Capabilities/Features 00:08:23.738 ================================ 00:08:23.738 Vendor ID: 1b36 00:08:23.738 Subsystem Vendor ID: 1af4 00:08:23.738 Serial Number: 12340 00:08:23.738 Model Number: QEMU NVMe Ctrl 00:08:23.738 Firmware Version: 8.0.0 00:08:23.738 Recommended Arb Burst: 6 00:08:23.738 IEEE OUI Identifier: 00 54 52 00:08:23.738 Multi-path I/O 00:08:23.738 May have multiple subsystem ports: No 00:08:23.738 May have multiple controllers: No 00:08:23.738 Associated with SR-IOV VF: No 00:08:23.738 Max Data Transfer Size: 524288 00:08:23.738 Max Number of Namespaces: 256 00:08:23.738 Max Number of I/O Queues: 64 00:08:23.738 NVMe Specification Version (VS): 1.4 00:08:23.738 NVMe Specification Version (Identify): 1.4 00:08:23.738 Maximum Queue Entries: 2048 00:08:23.738 Contiguous Queues Required: Yes 00:08:23.738 Arbitration Mechanisms Supported 00:08:23.738 Weighted Round Robin: Not Supported 00:08:23.738 Vendor Specific: Not Supported 00:08:23.738 Reset Timeout: 7500 ms 00:08:23.738 Doorbell Stride: 4 bytes 00:08:23.738 NVM Subsystem Reset: Not Supported 00:08:23.738 Command Sets Supported 00:08:23.738 NVM Command Set: Supported 00:08:23.738 Boot Partition: Not Supported 00:08:23.738 Memory Page Size Minimum: 4096 bytes 00:08:23.738 Memory Page Size Maximum: 65536 bytes 00:08:23.738 Persistent Memory Region: Not Supported 00:08:23.738 Optional Asynchronous Events Supported 00:08:23.738 Namespace Attribute Notices: Supported 00:08:23.738 Firmware Activation Notices: Not Supported 00:08:23.738 ANA Change Notices: Not Supported 00:08:23.738 PLE Aggregate Log Change Notices: Not Supported 00:08:23.738 LBA Status Info Alert Notices: Not Supported 00:08:23.738 EGE Aggregate Log Change Notices: Not Supported 00:08:23.738 Normal NVM Subsystem Shutdown event: Not Supported 00:08:23.738 Zone Descriptor Change Notices: Not Supported 00:08:23.738 Discovery Log Change Notices: Not Supported 00:08:23.738 Controller Attributes 00:08:23.738 128-bit Host Identifier: Not Supported 00:08:23.738 Non-Operational Permissive Mode: Not Supported 00:08:23.738 NVM Sets: Not Supported 00:08:23.738 Read Recovery Levels: Not Supported 00:08:23.738 Endurance Groups: Not Supported 00:08:23.738 Predictable Latency Mode: Not Supported 00:08:23.738 Traffic Based Keep ALive: Not Supported 00:08:23.738 Namespace Granularity: Not Supported 00:08:23.738 SQ Associations: Not Supported 00:08:23.738 UUID List: Not Supported 00:08:23.738 Multi-Domain Subsystem: Not Supported 00:08:23.738 Fixed Capacity Management: Not Supported 00:08:23.738 Variable Capacity Management: Not Supported 00:08:23.738 Delete Endurance Group: Not Supported 00:08:23.738 Delete NVM Set: Not Supported 00:08:23.738 Extended LBA Formats Supported: Supported 00:08:23.738 Flexible Data Placement Supported: Not Supported 00:08:23.738 00:08:23.738 Controller Memory Buffer Support 00:08:23.738 ================================ 00:08:23.738 Supported: No 00:08:23.738 00:08:23.738 Persistent Memory Region Support 00:08:23.738 
================================ 00:08:23.738 Supported: No 00:08:23.738 00:08:23.738 Admin Command Set Attributes 00:08:23.738 ============================ 00:08:23.738 Security Send/Receive: Not Supported 00:08:23.738 Format NVM: Supported 00:08:23.738 Firmware Activate/Download: Not Supported 00:08:23.738 Namespace Management: Supported 00:08:23.738 Device Self-Test: Not Supported 00:08:23.738 Directives: Supported 00:08:23.738 NVMe-MI: Not Supported 00:08:23.738 Virtualization Management: Not Supported 00:08:23.738 Doorbell Buffer Config: Supported 00:08:23.738 Get LBA Status Capability: Not Supported 00:08:23.738 Command & Feature Lockdown Capability: Not Supported 00:08:23.738 Abort Command Limit: 4 00:08:23.738 Async Event Request Limit: 4 00:08:23.738 Number of Firmware Slots: N/A 00:08:23.738 Firmware Slot 1 Read-Only: N/A 00:08:23.738 Firmware Activation Without Reset: N/A 00:08:23.738 Multiple Update Detection Support: N/A 00:08:23.738 Firmware Update Granularity: No Information Provided 00:08:23.738 Per-Namespace SMART Log: Yes 00:08:23.738 Asymmetric Namespace Access Log Page: Not Supported 00:08:23.738 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:23.738 Command Effects Log Page: Supported 00:08:23.738 Get Log Page Extended Data: Supported 00:08:23.738 Telemetry Log Pages: Not Supported 00:08:23.738 Persistent Event Log Pages: Not Supported 00:08:23.738 Supported Log Pages Log Page: May Support 00:08:23.738 Commands Supported & Effects Log Page: Not Supported 00:08:23.738 Feature Identifiers & Effects Log Page:May Support 00:08:23.738 NVMe-MI Commands & Effects Log Page: May Support 00:08:23.738 Data Area 4 for Telemetry Log: Not Supported 00:08:23.738 Error Log Page Entries Supported: 1 00:08:23.738 Keep Alive: Not Supported 00:08:23.738 00:08:23.738 NVM Command Set Attributes 00:08:23.738 ========================== 00:08:23.738 Submission Queue Entry Size 00:08:23.738 Max: 64 00:08:23.738 Min: 64 00:08:23.738 Completion Queue Entry Size 00:08:23.738 Max: 16 00:08:23.738 Min: 16 00:08:23.739 Number of Namespaces: 256 00:08:23.739 Compare Command: Supported 00:08:23.739 Write Uncorrectable Command: Not Supported 00:08:23.739 Dataset Management Command: Supported 00:08:23.739 Write Zeroes Command: Supported 00:08:23.739 Set Features Save Field: Supported 00:08:23.739 Reservations: Not Supported 00:08:23.739 Timestamp: Supported 00:08:23.739 Copy: Supported 00:08:23.739 Volatile Write Cache: Present 00:08:23.739 Atomic Write Unit (Normal): 1 00:08:23.739 Atomic Write Unit (PFail): 1 00:08:23.739 Atomic Compare & Write Unit: 1 00:08:23.739 Fused Compare & Write: Not Supported 00:08:23.739 Scatter-Gather List 00:08:23.739 SGL Command Set: Supported 00:08:23.739 SGL Keyed: Not Supported 00:08:23.739 SGL Bit Bucket Descriptor: Not Supported 00:08:23.739 SGL Metadata Pointer: Not Supported 00:08:23.739 Oversized SGL: Not Supported 00:08:23.739 SGL Metadata Address: Not Supported 00:08:23.739 SGL Offset: Not Supported 00:08:23.739 Transport SGL Data Block: Not Supported 00:08:23.739 Replay Protected Memory Block: Not Supported 00:08:23.739 00:08:23.739 Firmware Slot Information 00:08:23.739 ========================= 00:08:23.739 Active slot: 1 00:08:23.739 Slot 1 Firmware Revision: 1.0 00:08:23.739 00:08:23.739 00:08:23.739 Commands Supported and Effects 00:08:23.739 ============================== 00:08:23.739 Admin Commands 00:08:23.739 -------------- 00:08:23.739 Delete I/O Submission Queue (00h): Supported 00:08:23.739 Create I/O Submission Queue (01h): Supported 00:08:23.739 
Get Log Page (02h): Supported 00:08:23.739 Delete I/O Completion Queue (04h): Supported 00:08:23.739 Create I/O Completion Queue (05h): Supported 00:08:23.739 Identify (06h): Supported 00:08:23.739 Abort (08h): Supported 00:08:23.739 Set Features (09h): Supported 00:08:23.739 Get Features (0Ah): Supported 00:08:23.739 Asynchronous Event Request (0Ch): Supported 00:08:23.739 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:23.739 Directive Send (19h): Supported 00:08:23.739 Directive Receive (1Ah): Supported 00:08:23.739 Virtualization Management (1Ch): Supported 00:08:23.739 Doorbell Buffer Config (7Ch): Supported 00:08:23.739 Format NVM (80h): Supported LBA-Change 00:08:23.739 I/O Commands 00:08:23.739 ------------ 00:08:23.739 Flush (00h): Supported LBA-Change 00:08:23.739 Write (01h): Supported LBA-Change 00:08:23.739 Read (02h): Supported 00:08:23.739 Compare (05h): Supported 00:08:23.739 Write Zeroes (08h): Supported LBA-Change 00:08:23.739 Dataset Management (09h): Supported LBA-Change 00:08:23.739 Unknown (0Ch): Supported 00:08:23.739 Unknown (12h): Supported 00:08:23.739 Copy (19h): Supported LBA-Change 00:08:23.739 Unknown (1Dh): Supported LBA-Change 00:08:23.739 00:08:23.739 Error Log 00:08:23.739 ========= 00:08:23.739 00:08:23.739 Arbitration 00:08:23.739 =========== 00:08:23.739 Arbitration Burst: no limit 00:08:23.739 00:08:23.739 Power Management 00:08:23.739 ================ 00:08:23.739 Number of Power States: 1 00:08:23.739 Current Power State: Power State #0 00:08:23.739 Power State #0: 00:08:23.739 Max Power: 25.00 W 00:08:23.739 Non-Operational State: Operational 00:08:23.739 Entry Latency: 16 microseconds 00:08:23.739 Exit Latency: 4 microseconds 00:08:23.739 Relative Read Throughput: 0 00:08:23.739 Relative Read Latency: 0 00:08:23.739 Relative Write Throughput: 0 00:08:23.739 Relative Write Latency: 0 00:08:23.739 Idle Power: Not Reported 00:08:23.739 Active Power: Not Reported 00:08:23.739 Non-Operational Permissive Mode: Not Supported 00:08:23.739 00:08:23.739 Health Information 00:08:23.739 ================== 00:08:23.739 Critical Warnings: 00:08:23.739 Available Spare Space: OK 00:08:23.739 Temperature: OK 00:08:23.739 Device Reliability: OK 00:08:23.739 Read Only: No 00:08:23.739 Volatile Memory Backup: OK 00:08:23.739 Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.739 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:23.739 Available Spare: 0% 00:08:23.739 Available Spare Threshold: 0% 00:08:23.739 Life Percentage Used: 0% 00:08:23.739 Data Units Read: 669 00:08:23.739 Data Units Written: 597 00:08:23.739 Host Read Commands: 33565 00:08:23.739 Host Write Commands: 33351 00:08:23.739 Controller Busy Time: 0 minutes 00:08:23.739 Power Cycles: 0 00:08:23.739 Power On Hours: 0 hours 00:08:23.739 Unsafe Shutdowns: 0 00:08:23.739 Unrecoverable Media Errors: 0 00:08:23.739 Lifetime Error Log Entries: 0 00:08:23.739 Warning Temperature Time: 0 minutes 00:08:23.739 Critical Temperature Time: 0 minutes 00:08:23.739 00:08:23.739 Number of Queues 00:08:23.739 ================ 00:08:23.739 Number of I/O Submission Queues: 64 00:08:23.739 Number of I/O Completion Queues: 64 00:08:23.739 00:08:23.739 ZNS Specific Controller Data 00:08:23.739 ============================ 00:08:23.739 Zone Append Size Limit: 0 00:08:23.739 00:08:23.739 00:08:23.739 Active Namespaces 00:08:23.739 ================= 00:08:23.739 Namespace ID:1 00:08:23.739 Error Recovery Timeout: Unlimited 00:08:23.739 Command Set Identifier: NVM (00h) 00:08:23.739 Deallocate: Supported 
00:08:23.739 Deallocated/Unwritten Error: Supported 00:08:23.739 Deallocated Read Value: All 0x00 00:08:23.739 Deallocate in Write Zeroes: Not Supported 00:08:23.739 Deallocated Guard Field: 0xFFFF 00:08:23.739 Flush: Supported 00:08:23.739 Reservation: Not Supported 00:08:23.739 Metadata Transferred as: Separate Metadata Buffer 00:08:23.739 Namespace Sharing Capabilities: Private 00:08:23.739 Size (in LBAs): 1548666 (5GiB) 00:08:23.739 Capacity (in LBAs): 1548666 (5GiB) 00:08:23.739 Utilization (in LBAs): 1548666 (5GiB) 00:08:23.739 Thin Provisioning: Not Supported 00:08:23.739 Per-NS Atomic Units: No 00:08:23.739 Maximum Single Source Range Length: 128 00:08:23.739 Maximum Copy Length: 128 00:08:23.739 Maximum Source Range Count: 128 00:08:23.739 NGUID/EUI64 Never Reused: No 00:08:23.739 Namespace Write Protected: No 00:08:23.739 Number of LBA Formats: 8 00:08:23.739 Current LBA Format: LBA Format #07 00:08:23.739 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:23.739 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:23.739 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:23.739 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:23.739 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:23.739 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:23.739 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:23.739 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:23.739 00:08:23.739 NVM Specific Namespace Data 00:08:23.739 =========================== 00:08:23.739 Logical Block Storage Tag Mask: 0 00:08:23.739 Protection Information Capabilities: 00:08:23.739 16b Guard Protection Information Storage Tag Support: No 00:08:23.739 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:23.739 Storage Tag Check Read Support: No 00:08:23.739 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.739 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.739 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.739 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.739 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.739 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.739 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.739 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.739 13:21:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:23.739 13:21:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:24.003 ===================================================== 00:08:24.003 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.003 ===================================================== 00:08:24.003 Controller Capabilities/Features 00:08:24.003 ================================ 00:08:24.003 Vendor ID: 1b36 00:08:24.003 Subsystem Vendor ID: 1af4 00:08:24.003 Serial Number: 12341 00:08:24.003 Model Number: QEMU NVMe Ctrl 00:08:24.003 Firmware Version: 8.0.0 00:08:24.003 Recommended Arb Burst: 6 00:08:24.003 IEEE OUI Identifier: 00 54 52 00:08:24.003 Multi-path I/O 00:08:24.003 May have multiple subsystem ports: No 00:08:24.003 May have multiple 
controllers: No 00:08:24.003 Associated with SR-IOV VF: No 00:08:24.003 Max Data Transfer Size: 524288 00:08:24.003 Max Number of Namespaces: 256 00:08:24.003 Max Number of I/O Queues: 64 00:08:24.003 NVMe Specification Version (VS): 1.4 00:08:24.003 NVMe Specification Version (Identify): 1.4 00:08:24.003 Maximum Queue Entries: 2048 00:08:24.003 Contiguous Queues Required: Yes 00:08:24.003 Arbitration Mechanisms Supported 00:08:24.003 Weighted Round Robin: Not Supported 00:08:24.003 Vendor Specific: Not Supported 00:08:24.003 Reset Timeout: 7500 ms 00:08:24.003 Doorbell Stride: 4 bytes 00:08:24.003 NVM Subsystem Reset: Not Supported 00:08:24.003 Command Sets Supported 00:08:24.003 NVM Command Set: Supported 00:08:24.003 Boot Partition: Not Supported 00:08:24.003 Memory Page Size Minimum: 4096 bytes 00:08:24.003 Memory Page Size Maximum: 65536 bytes 00:08:24.003 Persistent Memory Region: Not Supported 00:08:24.003 Optional Asynchronous Events Supported 00:08:24.003 Namespace Attribute Notices: Supported 00:08:24.003 Firmware Activation Notices: Not Supported 00:08:24.003 ANA Change Notices: Not Supported 00:08:24.003 PLE Aggregate Log Change Notices: Not Supported 00:08:24.003 LBA Status Info Alert Notices: Not Supported 00:08:24.003 EGE Aggregate Log Change Notices: Not Supported 00:08:24.003 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.003 Zone Descriptor Change Notices: Not Supported 00:08:24.003 Discovery Log Change Notices: Not Supported 00:08:24.003 Controller Attributes 00:08:24.003 128-bit Host Identifier: Not Supported 00:08:24.003 Non-Operational Permissive Mode: Not Supported 00:08:24.003 NVM Sets: Not Supported 00:08:24.003 Read Recovery Levels: Not Supported 00:08:24.003 Endurance Groups: Not Supported 00:08:24.003 Predictable Latency Mode: Not Supported 00:08:24.003 Traffic Based Keep ALive: Not Supported 00:08:24.003 Namespace Granularity: Not Supported 00:08:24.003 SQ Associations: Not Supported 00:08:24.003 UUID List: Not Supported 00:08:24.003 Multi-Domain Subsystem: Not Supported 00:08:24.003 Fixed Capacity Management: Not Supported 00:08:24.003 Variable Capacity Management: Not Supported 00:08:24.003 Delete Endurance Group: Not Supported 00:08:24.003 Delete NVM Set: Not Supported 00:08:24.003 Extended LBA Formats Supported: Supported 00:08:24.003 Flexible Data Placement Supported: Not Supported 00:08:24.003 00:08:24.003 Controller Memory Buffer Support 00:08:24.003 ================================ 00:08:24.003 Supported: No 00:08:24.003 00:08:24.003 Persistent Memory Region Support 00:08:24.003 ================================ 00:08:24.003 Supported: No 00:08:24.003 00:08:24.003 Admin Command Set Attributes 00:08:24.003 ============================ 00:08:24.003 Security Send/Receive: Not Supported 00:08:24.003 Format NVM: Supported 00:08:24.003 Firmware Activate/Download: Not Supported 00:08:24.003 Namespace Management: Supported 00:08:24.003 Device Self-Test: Not Supported 00:08:24.003 Directives: Supported 00:08:24.003 NVMe-MI: Not Supported 00:08:24.003 Virtualization Management: Not Supported 00:08:24.003 Doorbell Buffer Config: Supported 00:08:24.003 Get LBA Status Capability: Not Supported 00:08:24.003 Command & Feature Lockdown Capability: Not Supported 00:08:24.003 Abort Command Limit: 4 00:08:24.003 Async Event Request Limit: 4 00:08:24.003 Number of Firmware Slots: N/A 00:08:24.003 Firmware Slot 1 Read-Only: N/A 00:08:24.003 Firmware Activation Without Reset: N/A 00:08:24.003 Multiple Update Detection Support: N/A 00:08:24.003 Firmware Update 
Granularity: No Information Provided 00:08:24.003 Per-Namespace SMART Log: Yes 00:08:24.003 Asymmetric Namespace Access Log Page: Not Supported 00:08:24.003 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:24.003 Command Effects Log Page: Supported 00:08:24.003 Get Log Page Extended Data: Supported 00:08:24.003 Telemetry Log Pages: Not Supported 00:08:24.003 Persistent Event Log Pages: Not Supported 00:08:24.003 Supported Log Pages Log Page: May Support 00:08:24.003 Commands Supported & Effects Log Page: Not Supported 00:08:24.003 Feature Identifiers & Effects Log Page:May Support 00:08:24.003 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.003 Data Area 4 for Telemetry Log: Not Supported 00:08:24.003 Error Log Page Entries Supported: 1 00:08:24.003 Keep Alive: Not Supported 00:08:24.003 00:08:24.003 NVM Command Set Attributes 00:08:24.003 ========================== 00:08:24.003 Submission Queue Entry Size 00:08:24.003 Max: 64 00:08:24.003 Min: 64 00:08:24.003 Completion Queue Entry Size 00:08:24.003 Max: 16 00:08:24.003 Min: 16 00:08:24.003 Number of Namespaces: 256 00:08:24.003 Compare Command: Supported 00:08:24.003 Write Uncorrectable Command: Not Supported 00:08:24.003 Dataset Management Command: Supported 00:08:24.003 Write Zeroes Command: Supported 00:08:24.003 Set Features Save Field: Supported 00:08:24.003 Reservations: Not Supported 00:08:24.003 Timestamp: Supported 00:08:24.003 Copy: Supported 00:08:24.003 Volatile Write Cache: Present 00:08:24.003 Atomic Write Unit (Normal): 1 00:08:24.003 Atomic Write Unit (PFail): 1 00:08:24.003 Atomic Compare & Write Unit: 1 00:08:24.003 Fused Compare & Write: Not Supported 00:08:24.003 Scatter-Gather List 00:08:24.003 SGL Command Set: Supported 00:08:24.003 SGL Keyed: Not Supported 00:08:24.003 SGL Bit Bucket Descriptor: Not Supported 00:08:24.003 SGL Metadata Pointer: Not Supported 00:08:24.003 Oversized SGL: Not Supported 00:08:24.003 SGL Metadata Address: Not Supported 00:08:24.003 SGL Offset: Not Supported 00:08:24.003 Transport SGL Data Block: Not Supported 00:08:24.003 Replay Protected Memory Block: Not Supported 00:08:24.003 00:08:24.003 Firmware Slot Information 00:08:24.003 ========================= 00:08:24.003 Active slot: 1 00:08:24.003 Slot 1 Firmware Revision: 1.0 00:08:24.003 00:08:24.003 00:08:24.003 Commands Supported and Effects 00:08:24.003 ============================== 00:08:24.003 Admin Commands 00:08:24.003 -------------- 00:08:24.003 Delete I/O Submission Queue (00h): Supported 00:08:24.003 Create I/O Submission Queue (01h): Supported 00:08:24.003 Get Log Page (02h): Supported 00:08:24.003 Delete I/O Completion Queue (04h): Supported 00:08:24.003 Create I/O Completion Queue (05h): Supported 00:08:24.003 Identify (06h): Supported 00:08:24.003 Abort (08h): Supported 00:08:24.003 Set Features (09h): Supported 00:08:24.003 Get Features (0Ah): Supported 00:08:24.003 Asynchronous Event Request (0Ch): Supported 00:08:24.004 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.004 Directive Send (19h): Supported 00:08:24.004 Directive Receive (1Ah): Supported 00:08:24.004 Virtualization Management (1Ch): Supported 00:08:24.004 Doorbell Buffer Config (7Ch): Supported 00:08:24.004 Format NVM (80h): Supported LBA-Change 00:08:24.004 I/O Commands 00:08:24.004 ------------ 00:08:24.004 Flush (00h): Supported LBA-Change 00:08:24.004 Write (01h): Supported LBA-Change 00:08:24.004 Read (02h): Supported 00:08:24.004 Compare (05h): Supported 00:08:24.004 Write Zeroes (08h): Supported LBA-Change 00:08:24.004 
Dataset Management (09h): Supported LBA-Change 00:08:24.004 Unknown (0Ch): Supported 00:08:24.004 Unknown (12h): Supported 00:08:24.004 Copy (19h): Supported LBA-Change 00:08:24.004 Unknown (1Dh): Supported LBA-Change 00:08:24.004 00:08:24.004 Error Log 00:08:24.004 ========= 00:08:24.004 00:08:24.004 Arbitration 00:08:24.004 =========== 00:08:24.004 Arbitration Burst: no limit 00:08:24.004 00:08:24.004 Power Management 00:08:24.004 ================ 00:08:24.004 Number of Power States: 1 00:08:24.004 Current Power State: Power State #0 00:08:24.004 Power State #0: 00:08:24.004 Max Power: 25.00 W 00:08:24.004 Non-Operational State: Operational 00:08:24.004 Entry Latency: 16 microseconds 00:08:24.004 Exit Latency: 4 microseconds 00:08:24.004 Relative Read Throughput: 0 00:08:24.004 Relative Read Latency: 0 00:08:24.004 Relative Write Throughput: 0 00:08:24.004 Relative Write Latency: 0 00:08:24.004 Idle Power: Not Reported 00:08:24.004 Active Power: Not Reported 00:08:24.004 Non-Operational Permissive Mode: Not Supported 00:08:24.004 00:08:24.004 Health Information 00:08:24.004 ================== 00:08:24.004 Critical Warnings: 00:08:24.004 Available Spare Space: OK 00:08:24.004 Temperature: OK 00:08:24.004 Device Reliability: OK 00:08:24.004 Read Only: No 00:08:24.004 Volatile Memory Backup: OK 00:08:24.004 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.004 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.004 Available Spare: 0% 00:08:24.004 Available Spare Threshold: 0% 00:08:24.004 Life Percentage Used: 0% 00:08:24.004 Data Units Read: 954 00:08:24.004 Data Units Written: 821 00:08:24.004 Host Read Commands: 49208 00:08:24.004 Host Write Commands: 48004 00:08:24.004 Controller Busy Time: 0 minutes 00:08:24.004 Power Cycles: 0 00:08:24.004 Power On Hours: 0 hours 00:08:24.004 Unsafe Shutdowns: 0 00:08:24.004 Unrecoverable Media Errors: 0 00:08:24.004 Lifetime Error Log Entries: 0 00:08:24.004 Warning Temperature Time: 0 minutes 00:08:24.004 Critical Temperature Time: 0 minutes 00:08:24.004 00:08:24.004 Number of Queues 00:08:24.004 ================ 00:08:24.004 Number of I/O Submission Queues: 64 00:08:24.004 Number of I/O Completion Queues: 64 00:08:24.004 00:08:24.004 ZNS Specific Controller Data 00:08:24.004 ============================ 00:08:24.004 Zone Append Size Limit: 0 00:08:24.004 00:08:24.004 00:08:24.004 Active Namespaces 00:08:24.004 ================= 00:08:24.004 Namespace ID:1 00:08:24.004 Error Recovery Timeout: Unlimited 00:08:24.004 Command Set Identifier: NVM (00h) 00:08:24.004 Deallocate: Supported 00:08:24.004 Deallocated/Unwritten Error: Supported 00:08:24.004 Deallocated Read Value: All 0x00 00:08:24.004 Deallocate in Write Zeroes: Not Supported 00:08:24.004 Deallocated Guard Field: 0xFFFF 00:08:24.004 Flush: Supported 00:08:24.004 Reservation: Not Supported 00:08:24.004 Namespace Sharing Capabilities: Private 00:08:24.004 Size (in LBAs): 1310720 (5GiB) 00:08:24.004 Capacity (in LBAs): 1310720 (5GiB) 00:08:24.004 Utilization (in LBAs): 1310720 (5GiB) 00:08:24.004 Thin Provisioning: Not Supported 00:08:24.004 Per-NS Atomic Units: No 00:08:24.004 Maximum Single Source Range Length: 128 00:08:24.004 Maximum Copy Length: 128 00:08:24.004 Maximum Source Range Count: 128 00:08:24.004 NGUID/EUI64 Never Reused: No 00:08:24.004 Namespace Write Protected: No 00:08:24.004 Number of LBA Formats: 8 00:08:24.004 Current LBA Format: LBA Format #04 00:08:24.004 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.004 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:24.004 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.004 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.004 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.004 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.004 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.004 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.004 00:08:24.004 NVM Specific Namespace Data 00:08:24.004 =========================== 00:08:24.004 Logical Block Storage Tag Mask: 0 00:08:24.004 Protection Information Capabilities: 00:08:24.004 16b Guard Protection Information Storage Tag Support: No 00:08:24.004 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.004 Storage Tag Check Read Support: No 00:08:24.004 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.004 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.004 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.004 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.004 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.004 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.004 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.004 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.004 13:21:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:24.004 13:21:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:24.267 ===================================================== 00:08:24.267 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.267 ===================================================== 00:08:24.267 Controller Capabilities/Features 00:08:24.267 ================================ 00:08:24.267 Vendor ID: 1b36 00:08:24.267 Subsystem Vendor ID: 1af4 00:08:24.267 Serial Number: 12342 00:08:24.267 Model Number: QEMU NVMe Ctrl 00:08:24.267 Firmware Version: 8.0.0 00:08:24.267 Recommended Arb Burst: 6 00:08:24.267 IEEE OUI Identifier: 00 54 52 00:08:24.267 Multi-path I/O 00:08:24.267 May have multiple subsystem ports: No 00:08:24.267 May have multiple controllers: No 00:08:24.267 Associated with SR-IOV VF: No 00:08:24.267 Max Data Transfer Size: 524288 00:08:24.267 Max Number of Namespaces: 256 00:08:24.267 Max Number of I/O Queues: 64 00:08:24.267 NVMe Specification Version (VS): 1.4 00:08:24.267 NVMe Specification Version (Identify): 1.4 00:08:24.267 Maximum Queue Entries: 2048 00:08:24.267 Contiguous Queues Required: Yes 00:08:24.267 Arbitration Mechanisms Supported 00:08:24.267 Weighted Round Robin: Not Supported 00:08:24.267 Vendor Specific: Not Supported 00:08:24.267 Reset Timeout: 7500 ms 00:08:24.267 Doorbell Stride: 4 bytes 00:08:24.267 NVM Subsystem Reset: Not Supported 00:08:24.267 Command Sets Supported 00:08:24.267 NVM Command Set: Supported 00:08:24.267 Boot Partition: Not Supported 00:08:24.267 Memory Page Size Minimum: 4096 bytes 00:08:24.267 Memory Page Size Maximum: 65536 bytes 00:08:24.267 Persistent Memory Region: Not Supported 00:08:24.267 Optional Asynchronous Events Supported 00:08:24.267 Namespace Attribute Notices: Supported 00:08:24.267 Firmware 
Activation Notices: Not Supported 00:08:24.267 ANA Change Notices: Not Supported 00:08:24.267 PLE Aggregate Log Change Notices: Not Supported 00:08:24.267 LBA Status Info Alert Notices: Not Supported 00:08:24.267 EGE Aggregate Log Change Notices: Not Supported 00:08:24.267 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.267 Zone Descriptor Change Notices: Not Supported 00:08:24.267 Discovery Log Change Notices: Not Supported 00:08:24.267 Controller Attributes 00:08:24.267 128-bit Host Identifier: Not Supported 00:08:24.267 Non-Operational Permissive Mode: Not Supported 00:08:24.268 NVM Sets: Not Supported 00:08:24.268 Read Recovery Levels: Not Supported 00:08:24.268 Endurance Groups: Not Supported 00:08:24.268 Predictable Latency Mode: Not Supported 00:08:24.268 Traffic Based Keep ALive: Not Supported 00:08:24.268 Namespace Granularity: Not Supported 00:08:24.268 SQ Associations: Not Supported 00:08:24.268 UUID List: Not Supported 00:08:24.268 Multi-Domain Subsystem: Not Supported 00:08:24.268 Fixed Capacity Management: Not Supported 00:08:24.268 Variable Capacity Management: Not Supported 00:08:24.268 Delete Endurance Group: Not Supported 00:08:24.268 Delete NVM Set: Not Supported 00:08:24.268 Extended LBA Formats Supported: Supported 00:08:24.268 Flexible Data Placement Supported: Not Supported 00:08:24.268 00:08:24.268 Controller Memory Buffer Support 00:08:24.268 ================================ 00:08:24.268 Supported: No 00:08:24.268 00:08:24.268 Persistent Memory Region Support 00:08:24.268 ================================ 00:08:24.268 Supported: No 00:08:24.268 00:08:24.268 Admin Command Set Attributes 00:08:24.268 ============================ 00:08:24.268 Security Send/Receive: Not Supported 00:08:24.268 Format NVM: Supported 00:08:24.268 Firmware Activate/Download: Not Supported 00:08:24.268 Namespace Management: Supported 00:08:24.268 Device Self-Test: Not Supported 00:08:24.268 Directives: Supported 00:08:24.268 NVMe-MI: Not Supported 00:08:24.268 Virtualization Management: Not Supported 00:08:24.268 Doorbell Buffer Config: Supported 00:08:24.268 Get LBA Status Capability: Not Supported 00:08:24.268 Command & Feature Lockdown Capability: Not Supported 00:08:24.268 Abort Command Limit: 4 00:08:24.268 Async Event Request Limit: 4 00:08:24.268 Number of Firmware Slots: N/A 00:08:24.268 Firmware Slot 1 Read-Only: N/A 00:08:24.268 Firmware Activation Without Reset: N/A 00:08:24.268 Multiple Update Detection Support: N/A 00:08:24.268 Firmware Update Granularity: No Information Provided 00:08:24.268 Per-Namespace SMART Log: Yes 00:08:24.268 Asymmetric Namespace Access Log Page: Not Supported 00:08:24.268 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:24.268 Command Effects Log Page: Supported 00:08:24.268 Get Log Page Extended Data: Supported 00:08:24.268 Telemetry Log Pages: Not Supported 00:08:24.268 Persistent Event Log Pages: Not Supported 00:08:24.268 Supported Log Pages Log Page: May Support 00:08:24.268 Commands Supported & Effects Log Page: Not Supported 00:08:24.268 Feature Identifiers & Effects Log Page:May Support 00:08:24.268 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.268 Data Area 4 for Telemetry Log: Not Supported 00:08:24.268 Error Log Page Entries Supported: 1 00:08:24.268 Keep Alive: Not Supported 00:08:24.268 00:08:24.268 NVM Command Set Attributes 00:08:24.268 ========================== 00:08:24.268 Submission Queue Entry Size 00:08:24.268 Max: 64 00:08:24.268 Min: 64 00:08:24.268 Completion Queue Entry Size 00:08:24.268 Max: 16 
00:08:24.268 Min: 16 00:08:24.268 Number of Namespaces: 256 00:08:24.268 Compare Command: Supported 00:08:24.268 Write Uncorrectable Command: Not Supported 00:08:24.268 Dataset Management Command: Supported 00:08:24.268 Write Zeroes Command: Supported 00:08:24.268 Set Features Save Field: Supported 00:08:24.268 Reservations: Not Supported 00:08:24.268 Timestamp: Supported 00:08:24.268 Copy: Supported 00:08:24.268 Volatile Write Cache: Present 00:08:24.268 Atomic Write Unit (Normal): 1 00:08:24.268 Atomic Write Unit (PFail): 1 00:08:24.268 Atomic Compare & Write Unit: 1 00:08:24.268 Fused Compare & Write: Not Supported 00:08:24.268 Scatter-Gather List 00:08:24.268 SGL Command Set: Supported 00:08:24.268 SGL Keyed: Not Supported 00:08:24.268 SGL Bit Bucket Descriptor: Not Supported 00:08:24.268 SGL Metadata Pointer: Not Supported 00:08:24.268 Oversized SGL: Not Supported 00:08:24.268 SGL Metadata Address: Not Supported 00:08:24.268 SGL Offset: Not Supported 00:08:24.268 Transport SGL Data Block: Not Supported 00:08:24.268 Replay Protected Memory Block: Not Supported 00:08:24.268 00:08:24.268 Firmware Slot Information 00:08:24.268 ========================= 00:08:24.268 Active slot: 1 00:08:24.268 Slot 1 Firmware Revision: 1.0 00:08:24.268 00:08:24.268 00:08:24.268 Commands Supported and Effects 00:08:24.268 ============================== 00:08:24.268 Admin Commands 00:08:24.268 -------------- 00:08:24.268 Delete I/O Submission Queue (00h): Supported 00:08:24.268 Create I/O Submission Queue (01h): Supported 00:08:24.268 Get Log Page (02h): Supported 00:08:24.268 Delete I/O Completion Queue (04h): Supported 00:08:24.268 Create I/O Completion Queue (05h): Supported 00:08:24.268 Identify (06h): Supported 00:08:24.268 Abort (08h): Supported 00:08:24.268 Set Features (09h): Supported 00:08:24.268 Get Features (0Ah): Supported 00:08:24.268 Asynchronous Event Request (0Ch): Supported 00:08:24.268 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.268 Directive Send (19h): Supported 00:08:24.268 Directive Receive (1Ah): Supported 00:08:24.268 Virtualization Management (1Ch): Supported 00:08:24.268 Doorbell Buffer Config (7Ch): Supported 00:08:24.268 Format NVM (80h): Supported LBA-Change 00:08:24.268 I/O Commands 00:08:24.268 ------------ 00:08:24.268 Flush (00h): Supported LBA-Change 00:08:24.268 Write (01h): Supported LBA-Change 00:08:24.268 Read (02h): Supported 00:08:24.268 Compare (05h): Supported 00:08:24.268 Write Zeroes (08h): Supported LBA-Change 00:08:24.268 Dataset Management (09h): Supported LBA-Change 00:08:24.268 Unknown (0Ch): Supported 00:08:24.268 Unknown (12h): Supported 00:08:24.268 Copy (19h): Supported LBA-Change 00:08:24.268 Unknown (1Dh): Supported LBA-Change 00:08:24.268 00:08:24.268 Error Log 00:08:24.268 ========= 00:08:24.268 00:08:24.268 Arbitration 00:08:24.268 =========== 00:08:24.268 Arbitration Burst: no limit 00:08:24.268 00:08:24.268 Power Management 00:08:24.268 ================ 00:08:24.268 Number of Power States: 1 00:08:24.268 Current Power State: Power State #0 00:08:24.268 Power State #0: 00:08:24.268 Max Power: 25.00 W 00:08:24.268 Non-Operational State: Operational 00:08:24.268 Entry Latency: 16 microseconds 00:08:24.268 Exit Latency: 4 microseconds 00:08:24.268 Relative Read Throughput: 0 00:08:24.268 Relative Read Latency: 0 00:08:24.268 Relative Write Throughput: 0 00:08:24.268 Relative Write Latency: 0 00:08:24.268 Idle Power: Not Reported 00:08:24.268 Active Power: Not Reported 00:08:24.268 Non-Operational Permissive Mode: Not Supported 
00:08:24.268 00:08:24.268 Health Information 00:08:24.268 ================== 00:08:24.268 Critical Warnings: 00:08:24.268 Available Spare Space: OK 00:08:24.268 Temperature: OK 00:08:24.268 Device Reliability: OK 00:08:24.268 Read Only: No 00:08:24.268 Volatile Memory Backup: OK 00:08:24.268 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.268 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.268 Available Spare: 0% 00:08:24.268 Available Spare Threshold: 0% 00:08:24.268 Life Percentage Used: 0% 00:08:24.268 Data Units Read: 2021 00:08:24.268 Data Units Written: 1808 00:08:24.268 Host Read Commands: 101470 00:08:24.268 Host Write Commands: 99739 00:08:24.268 Controller Busy Time: 0 minutes 00:08:24.268 Power Cycles: 0 00:08:24.268 Power On Hours: 0 hours 00:08:24.268 Unsafe Shutdowns: 0 00:08:24.268 Unrecoverable Media Errors: 0 00:08:24.268 Lifetime Error Log Entries: 0 00:08:24.268 Warning Temperature Time: 0 minutes 00:08:24.268 Critical Temperature Time: 0 minutes 00:08:24.268 00:08:24.268 Number of Queues 00:08:24.268 ================ 00:08:24.269 Number of I/O Submission Queues: 64 00:08:24.269 Number of I/O Completion Queues: 64 00:08:24.269 00:08:24.269 ZNS Specific Controller Data 00:08:24.269 ============================ 00:08:24.269 Zone Append Size Limit: 0 00:08:24.269 00:08:24.269 00:08:24.269 Active Namespaces 00:08:24.269 ================= 00:08:24.269 Namespace ID:1 00:08:24.269 Error Recovery Timeout: Unlimited 00:08:24.269 Command Set Identifier: NVM (00h) 00:08:24.269 Deallocate: Supported 00:08:24.269 Deallocated/Unwritten Error: Supported 00:08:24.269 Deallocated Read Value: All 0x00 00:08:24.269 Deallocate in Write Zeroes: Not Supported 00:08:24.269 Deallocated Guard Field: 0xFFFF 00:08:24.269 Flush: Supported 00:08:24.269 Reservation: Not Supported 00:08:24.269 Namespace Sharing Capabilities: Private 00:08:24.269 Size (in LBAs): 1048576 (4GiB) 00:08:24.269 Capacity (in LBAs): 1048576 (4GiB) 00:08:24.269 Utilization (in LBAs): 1048576 (4GiB) 00:08:24.269 Thin Provisioning: Not Supported 00:08:24.269 Per-NS Atomic Units: No 00:08:24.269 Maximum Single Source Range Length: 128 00:08:24.269 Maximum Copy Length: 128 00:08:24.269 Maximum Source Range Count: 128 00:08:24.269 NGUID/EUI64 Never Reused: No 00:08:24.269 Namespace Write Protected: No 00:08:24.269 Number of LBA Formats: 8 00:08:24.269 Current LBA Format: LBA Format #04 00:08:24.269 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.269 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.269 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.269 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.269 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.269 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.269 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.269 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.269 00:08:24.269 NVM Specific Namespace Data 00:08:24.269 =========================== 00:08:24.269 Logical Block Storage Tag Mask: 0 00:08:24.269 Protection Information Capabilities: 00:08:24.269 16b Guard Protection Information Storage Tag Support: No 00:08:24.269 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.269 Storage Tag Check Read Support: No 00:08:24.269 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Namespace ID:2 00:08:24.269 Error Recovery Timeout: Unlimited 00:08:24.269 Command Set Identifier: NVM (00h) 00:08:24.269 Deallocate: Supported 00:08:24.269 Deallocated/Unwritten Error: Supported 00:08:24.269 Deallocated Read Value: All 0x00 00:08:24.269 Deallocate in Write Zeroes: Not Supported 00:08:24.269 Deallocated Guard Field: 0xFFFF 00:08:24.269 Flush: Supported 00:08:24.269 Reservation: Not Supported 00:08:24.269 Namespace Sharing Capabilities: Private 00:08:24.269 Size (in LBAs): 1048576 (4GiB) 00:08:24.269 Capacity (in LBAs): 1048576 (4GiB) 00:08:24.269 Utilization (in LBAs): 1048576 (4GiB) 00:08:24.269 Thin Provisioning: Not Supported 00:08:24.269 Per-NS Atomic Units: No 00:08:24.269 Maximum Single Source Range Length: 128 00:08:24.269 Maximum Copy Length: 128 00:08:24.269 Maximum Source Range Count: 128 00:08:24.269 NGUID/EUI64 Never Reused: No 00:08:24.269 Namespace Write Protected: No 00:08:24.269 Number of LBA Formats: 8 00:08:24.269 Current LBA Format: LBA Format #04 00:08:24.269 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.269 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.269 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.269 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.269 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.269 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.269 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.269 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.269 00:08:24.269 NVM Specific Namespace Data 00:08:24.269 =========================== 00:08:24.269 Logical Block Storage Tag Mask: 0 00:08:24.269 Protection Information Capabilities: 00:08:24.269 16b Guard Protection Information Storage Tag Support: No 00:08:24.269 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.269 Storage Tag Check Read Support: No 00:08:24.269 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.269 Namespace ID:3 00:08:24.269 Error Recovery Timeout: Unlimited 00:08:24.269 Command Set Identifier: NVM (00h) 00:08:24.269 Deallocate: Supported 00:08:24.269 Deallocated/Unwritten Error: Supported 00:08:24.269 Deallocated Read 
Value: All 0x00 00:08:24.269 Deallocate in Write Zeroes: Not Supported 00:08:24.269 Deallocated Guard Field: 0xFFFF 00:08:24.269 Flush: Supported 00:08:24.269 Reservation: Not Supported 00:08:24.269 Namespace Sharing Capabilities: Private 00:08:24.269 Size (in LBAs): 1048576 (4GiB) 00:08:24.269 Capacity (in LBAs): 1048576 (4GiB) 00:08:24.269 Utilization (in LBAs): 1048576 (4GiB) 00:08:24.269 Thin Provisioning: Not Supported 00:08:24.269 Per-NS Atomic Units: No 00:08:24.269 Maximum Single Source Range Length: 128 00:08:24.269 Maximum Copy Length: 128 00:08:24.269 Maximum Source Range Count: 128 00:08:24.269 NGUID/EUI64 Never Reused: No 00:08:24.269 Namespace Write Protected: No 00:08:24.269 Number of LBA Formats: 8 00:08:24.269 Current LBA Format: LBA Format #04 00:08:24.269 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.269 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.269 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.269 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.269 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.269 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.269 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.269 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.269 00:08:24.269 NVM Specific Namespace Data 00:08:24.269 =========================== 00:08:24.269 Logical Block Storage Tag Mask: 0 00:08:24.269 Protection Information Capabilities: 00:08:24.269 16b Guard Protection Information Storage Tag Support: No 00:08:24.269 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.270 Storage Tag Check Read Support: No 00:08:24.270 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 13:21:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:24.270 13:21:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:24.532 ===================================================== 00:08:24.532 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.532 ===================================================== 00:08:24.532 Controller Capabilities/Features 00:08:24.532 ================================ 00:08:24.532 Vendor ID: 1b36 00:08:24.532 Subsystem Vendor ID: 1af4 00:08:24.532 Serial Number: 12343 00:08:24.532 Model Number: QEMU NVMe Ctrl 00:08:24.532 Firmware Version: 8.0.0 00:08:24.532 Recommended Arb Burst: 6 00:08:24.532 IEEE OUI Identifier: 00 54 52 00:08:24.532 Multi-path I/O 00:08:24.532 May have multiple subsystem ports: No 00:08:24.532 May have multiple controllers: Yes 00:08:24.532 Associated with SR-IOV VF: No 00:08:24.532 Max Data Transfer Size: 524288 00:08:24.532 Max Number of Namespaces: 
256 00:08:24.532 Max Number of I/O Queues: 64 00:08:24.532 NVMe Specification Version (VS): 1.4 00:08:24.532 NVMe Specification Version (Identify): 1.4 00:08:24.532 Maximum Queue Entries: 2048 00:08:24.532 Contiguous Queues Required: Yes 00:08:24.532 Arbitration Mechanisms Supported 00:08:24.532 Weighted Round Robin: Not Supported 00:08:24.532 Vendor Specific: Not Supported 00:08:24.532 Reset Timeout: 7500 ms 00:08:24.532 Doorbell Stride: 4 bytes 00:08:24.532 NVM Subsystem Reset: Not Supported 00:08:24.532 Command Sets Supported 00:08:24.532 NVM Command Set: Supported 00:08:24.532 Boot Partition: Not Supported 00:08:24.532 Memory Page Size Minimum: 4096 bytes 00:08:24.532 Memory Page Size Maximum: 65536 bytes 00:08:24.532 Persistent Memory Region: Not Supported 00:08:24.532 Optional Asynchronous Events Supported 00:08:24.532 Namespace Attribute Notices: Supported 00:08:24.532 Firmware Activation Notices: Not Supported 00:08:24.532 ANA Change Notices: Not Supported 00:08:24.532 PLE Aggregate Log Change Notices: Not Supported 00:08:24.532 LBA Status Info Alert Notices: Not Supported 00:08:24.532 EGE Aggregate Log Change Notices: Not Supported 00:08:24.532 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.532 Zone Descriptor Change Notices: Not Supported 00:08:24.532 Discovery Log Change Notices: Not Supported 00:08:24.532 Controller Attributes 00:08:24.532 128-bit Host Identifier: Not Supported 00:08:24.532 Non-Operational Permissive Mode: Not Supported 00:08:24.532 NVM Sets: Not Supported 00:08:24.532 Read Recovery Levels: Not Supported 00:08:24.532 Endurance Groups: Supported 00:08:24.532 Predictable Latency Mode: Not Supported 00:08:24.532 Traffic Based Keep ALive: Not Supported 00:08:24.532 Namespace Granularity: Not Supported 00:08:24.532 SQ Associations: Not Supported 00:08:24.532 UUID List: Not Supported 00:08:24.532 Multi-Domain Subsystem: Not Supported 00:08:24.532 Fixed Capacity Management: Not Supported 00:08:24.532 Variable Capacity Management: Not Supported 00:08:24.532 Delete Endurance Group: Not Supported 00:08:24.532 Delete NVM Set: Not Supported 00:08:24.532 Extended LBA Formats Supported: Supported 00:08:24.532 Flexible Data Placement Supported: Supported 00:08:24.532 00:08:24.532 Controller Memory Buffer Support 00:08:24.532 ================================ 00:08:24.532 Supported: No 00:08:24.532 00:08:24.532 Persistent Memory Region Support 00:08:24.532 ================================ 00:08:24.532 Supported: No 00:08:24.532 00:08:24.532 Admin Command Set Attributes 00:08:24.532 ============================ 00:08:24.532 Security Send/Receive: Not Supported 00:08:24.532 Format NVM: Supported 00:08:24.532 Firmware Activate/Download: Not Supported 00:08:24.532 Namespace Management: Supported 00:08:24.532 Device Self-Test: Not Supported 00:08:24.532 Directives: Supported 00:08:24.532 NVMe-MI: Not Supported 00:08:24.532 Virtualization Management: Not Supported 00:08:24.532 Doorbell Buffer Config: Supported 00:08:24.532 Get LBA Status Capability: Not Supported 00:08:24.532 Command & Feature Lockdown Capability: Not Supported 00:08:24.532 Abort Command Limit: 4 00:08:24.532 Async Event Request Limit: 4 00:08:24.532 Number of Firmware Slots: N/A 00:08:24.532 Firmware Slot 1 Read-Only: N/A 00:08:24.532 Firmware Activation Without Reset: N/A 00:08:24.532 Multiple Update Detection Support: N/A 00:08:24.532 Firmware Update Granularity: No Information Provided 00:08:24.532 Per-Namespace SMART Log: Yes 00:08:24.532 Asymmetric Namespace Access Log Page: Not Supported 
00:08:24.532 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:24.532 Command Effects Log Page: Supported 00:08:24.532 Get Log Page Extended Data: Supported 00:08:24.532 Telemetry Log Pages: Not Supported 00:08:24.532 Persistent Event Log Pages: Not Supported 00:08:24.532 Supported Log Pages Log Page: May Support 00:08:24.532 Commands Supported & Effects Log Page: Not Supported 00:08:24.532 Feature Identifiers & Effects Log Page:May Support 00:08:24.532 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.532 Data Area 4 for Telemetry Log: Not Supported 00:08:24.532 Error Log Page Entries Supported: 1 00:08:24.532 Keep Alive: Not Supported 00:08:24.532 00:08:24.532 NVM Command Set Attributes 00:08:24.532 ========================== 00:08:24.532 Submission Queue Entry Size 00:08:24.532 Max: 64 00:08:24.532 Min: 64 00:08:24.532 Completion Queue Entry Size 00:08:24.532 Max: 16 00:08:24.532 Min: 16 00:08:24.532 Number of Namespaces: 256 00:08:24.532 Compare Command: Supported 00:08:24.532 Write Uncorrectable Command: Not Supported 00:08:24.532 Dataset Management Command: Supported 00:08:24.532 Write Zeroes Command: Supported 00:08:24.532 Set Features Save Field: Supported 00:08:24.532 Reservations: Not Supported 00:08:24.532 Timestamp: Supported 00:08:24.532 Copy: Supported 00:08:24.532 Volatile Write Cache: Present 00:08:24.532 Atomic Write Unit (Normal): 1 00:08:24.532 Atomic Write Unit (PFail): 1 00:08:24.532 Atomic Compare & Write Unit: 1 00:08:24.532 Fused Compare & Write: Not Supported 00:08:24.533 Scatter-Gather List 00:08:24.533 SGL Command Set: Supported 00:08:24.533 SGL Keyed: Not Supported 00:08:24.533 SGL Bit Bucket Descriptor: Not Supported 00:08:24.533 SGL Metadata Pointer: Not Supported 00:08:24.533 Oversized SGL: Not Supported 00:08:24.533 SGL Metadata Address: Not Supported 00:08:24.533 SGL Offset: Not Supported 00:08:24.533 Transport SGL Data Block: Not Supported 00:08:24.533 Replay Protected Memory Block: Not Supported 00:08:24.533 00:08:24.533 Firmware Slot Information 00:08:24.533 ========================= 00:08:24.533 Active slot: 1 00:08:24.533 Slot 1 Firmware Revision: 1.0 00:08:24.533 00:08:24.533 00:08:24.533 Commands Supported and Effects 00:08:24.533 ============================== 00:08:24.533 Admin Commands 00:08:24.533 -------------- 00:08:24.533 Delete I/O Submission Queue (00h): Supported 00:08:24.533 Create I/O Submission Queue (01h): Supported 00:08:24.533 Get Log Page (02h): Supported 00:08:24.533 Delete I/O Completion Queue (04h): Supported 00:08:24.533 Create I/O Completion Queue (05h): Supported 00:08:24.533 Identify (06h): Supported 00:08:24.533 Abort (08h): Supported 00:08:24.533 Set Features (09h): Supported 00:08:24.533 Get Features (0Ah): Supported 00:08:24.533 Asynchronous Event Request (0Ch): Supported 00:08:24.533 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.533 Directive Send (19h): Supported 00:08:24.533 Directive Receive (1Ah): Supported 00:08:24.533 Virtualization Management (1Ch): Supported 00:08:24.533 Doorbell Buffer Config (7Ch): Supported 00:08:24.533 Format NVM (80h): Supported LBA-Change 00:08:24.533 I/O Commands 00:08:24.533 ------------ 00:08:24.533 Flush (00h): Supported LBA-Change 00:08:24.533 Write (01h): Supported LBA-Change 00:08:24.533 Read (02h): Supported 00:08:24.533 Compare (05h): Supported 00:08:24.533 Write Zeroes (08h): Supported LBA-Change 00:08:24.533 Dataset Management (09h): Supported LBA-Change 00:08:24.533 Unknown (0Ch): Supported 00:08:24.533 Unknown (12h): Supported 00:08:24.533 Copy 
(19h): Supported LBA-Change 00:08:24.533 Unknown (1Dh): Supported LBA-Change 00:08:24.533 00:08:24.533 Error Log 00:08:24.533 ========= 00:08:24.533 00:08:24.533 Arbitration 00:08:24.533 =========== 00:08:24.533 Arbitration Burst: no limit 00:08:24.533 00:08:24.533 Power Management 00:08:24.533 ================ 00:08:24.533 Number of Power States: 1 00:08:24.533 Current Power State: Power State #0 00:08:24.533 Power State #0: 00:08:24.533 Max Power: 25.00 W 00:08:24.533 Non-Operational State: Operational 00:08:24.533 Entry Latency: 16 microseconds 00:08:24.533 Exit Latency: 4 microseconds 00:08:24.533 Relative Read Throughput: 0 00:08:24.533 Relative Read Latency: 0 00:08:24.533 Relative Write Throughput: 0 00:08:24.533 Relative Write Latency: 0 00:08:24.533 Idle Power: Not Reported 00:08:24.533 Active Power: Not Reported 00:08:24.533 Non-Operational Permissive Mode: Not Supported 00:08:24.533 00:08:24.533 Health Information 00:08:24.533 ================== 00:08:24.533 Critical Warnings: 00:08:24.533 Available Spare Space: OK 00:08:24.533 Temperature: OK 00:08:24.533 Device Reliability: OK 00:08:24.533 Read Only: No 00:08:24.533 Volatile Memory Backup: OK 00:08:24.533 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.533 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.533 Available Spare: 0% 00:08:24.533 Available Spare Threshold: 0% 00:08:24.533 Life Percentage Used: 0% 00:08:24.533 Data Units Read: 750 00:08:24.533 Data Units Written: 679 00:08:24.533 Host Read Commands: 34433 00:08:24.533 Host Write Commands: 33856 00:08:24.533 Controller Busy Time: 0 minutes 00:08:24.533 Power Cycles: 0 00:08:24.533 Power On Hours: 0 hours 00:08:24.533 Unsafe Shutdowns: 0 00:08:24.533 Unrecoverable Media Errors: 0 00:08:24.533 Lifetime Error Log Entries: 0 00:08:24.533 Warning Temperature Time: 0 minutes 00:08:24.533 Critical Temperature Time: 0 minutes 00:08:24.533 00:08:24.533 Number of Queues 00:08:24.533 ================ 00:08:24.533 Number of I/O Submission Queues: 64 00:08:24.533 Number of I/O Completion Queues: 64 00:08:24.533 00:08:24.533 ZNS Specific Controller Data 00:08:24.533 ============================ 00:08:24.533 Zone Append Size Limit: 0 00:08:24.533 00:08:24.533 00:08:24.533 Active Namespaces 00:08:24.533 ================= 00:08:24.533 Namespace ID:1 00:08:24.533 Error Recovery Timeout: Unlimited 00:08:24.533 Command Set Identifier: NVM (00h) 00:08:24.533 Deallocate: Supported 00:08:24.533 Deallocated/Unwritten Error: Supported 00:08:24.533 Deallocated Read Value: All 0x00 00:08:24.533 Deallocate in Write Zeroes: Not Supported 00:08:24.533 Deallocated Guard Field: 0xFFFF 00:08:24.533 Flush: Supported 00:08:24.533 Reservation: Not Supported 00:08:24.533 Namespace Sharing Capabilities: Multiple Controllers 00:08:24.533 Size (in LBAs): 262144 (1GiB) 00:08:24.533 Capacity (in LBAs): 262144 (1GiB) 00:08:24.533 Utilization (in LBAs): 262144 (1GiB) 00:08:24.533 Thin Provisioning: Not Supported 00:08:24.533 Per-NS Atomic Units: No 00:08:24.533 Maximum Single Source Range Length: 128 00:08:24.533 Maximum Copy Length: 128 00:08:24.533 Maximum Source Range Count: 128 00:08:24.533 NGUID/EUI64 Never Reused: No 00:08:24.533 Namespace Write Protected: No 00:08:24.533 Endurance group ID: 1 00:08:24.533 Number of LBA Formats: 8 00:08:24.533 Current LBA Format: LBA Format #04 00:08:24.533 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.533 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.533 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.533 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:24.533 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.533 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.533 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.533 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.533 00:08:24.533 Get Feature FDP: 00:08:24.533 ================ 00:08:24.533 Enabled: Yes 00:08:24.533 FDP configuration index: 0 00:08:24.533 00:08:24.533 FDP configurations log page 00:08:24.533 =========================== 00:08:24.533 Number of FDP configurations: 1 00:08:24.533 Version: 0 00:08:24.533 Size: 112 00:08:24.533 FDP Configuration Descriptor: 0 00:08:24.533 Descriptor Size: 96 00:08:24.533 Reclaim Group Identifier format: 2 00:08:24.533 FDP Volatile Write Cache: Not Present 00:08:24.533 FDP Configuration: Valid 00:08:24.533 Vendor Specific Size: 0 00:08:24.533 Number of Reclaim Groups: 2 00:08:24.533 Number of Reclaim Unit Handles: 8 00:08:24.533 Max Placement Identifiers: 128 00:08:24.533 Number of Namespaces Supported: 256 00:08:24.533 Reclaim unit Nominal Size: 6000000 bytes 00:08:24.533 Estimated Reclaim Unit Time Limit: Not Reported 00:08:24.533 RUH Desc #000: RUH Type: Initially Isolated 00:08:24.533 RUH Desc #001: RUH Type: Initially Isolated 00:08:24.533 RUH Desc #002: RUH Type: Initially Isolated 00:08:24.533 RUH Desc #003: RUH Type: Initially Isolated 00:08:24.533 RUH Desc #004: RUH Type: Initially Isolated 00:08:24.533 RUH Desc #005: RUH Type: Initially Isolated 00:08:24.533 RUH Desc #006: RUH Type: Initially Isolated 00:08:24.534 RUH Desc #007: RUH Type: Initially Isolated 00:08:24.534 00:08:24.534 FDP reclaim unit handle usage log page 00:08:24.534 ====================================== 00:08:24.534 Number of Reclaim Unit Handles: 8 00:08:24.534 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:24.534 RUH Usage Desc #001: RUH Attributes: Unused 00:08:24.534 RUH Usage Desc #002: RUH Attributes: Unused 00:08:24.534 RUH Usage Desc #003: RUH Attributes: Unused 00:08:24.534 RUH Usage Desc #004: RUH Attributes: Unused 00:08:24.534 RUH Usage Desc #005: RUH Attributes: Unused 00:08:24.534 RUH Usage Desc #006: RUH Attributes: Unused 00:08:24.534 RUH Usage Desc #007: RUH Attributes: Unused 00:08:24.534 00:08:24.534 FDP statistics log page 00:08:24.534 ======================= 00:08:24.534 Host bytes with metadata written: 419012608 00:08:24.534 Media bytes with metadata written: 419057664 00:08:24.534 Media bytes erased: 0 00:08:24.534 00:08:24.534 FDP events log page 00:08:24.534 =================== 00:08:24.534 Number of FDP events: 0 00:08:24.534 00:08:24.534 NVM Specific Namespace Data 00:08:24.534 =========================== 00:08:24.534 Logical Block Storage Tag Mask: 0 00:08:24.534 Protection Information Capabilities: 00:08:24.534 16b Guard Protection Information Storage Tag Support: No 00:08:24.534 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.534 Storage Tag Check Read Support: No 00:08:24.534 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.534 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.534 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.534 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.534 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.534 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.534 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.534 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.534 00:08:24.534 real 0m1.212s 00:08:24.534 user 0m0.450s 00:08:24.534 sys 0m0.547s 00:08:24.534 13:21:20 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.534 ************************************ 00:08:24.534 END TEST nvme_identify 00:08:24.534 ************************************ 00:08:24.534 13:21:20 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:24.534 13:21:20 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:24.534 13:21:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.534 13:21:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.534 13:21:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.534 ************************************ 00:08:24.534 START TEST nvme_perf 00:08:24.534 ************************************ 00:08:24.534 13:21:20 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:24.534 13:21:20 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:25.923 Initializing NVMe Controllers 00:08:25.923 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.923 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.923 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.923 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.923 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:25.923 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:25.923 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:25.923 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:25.923 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:25.923 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:25.923 Initialization complete. Launching workers. 
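For reference when reading the results that follow: the MiB/s column is simply the IOPS column multiplied by the 12288-byte (12 KiB) I/O size selected with -o 12288 on the spdk_nvme_perf command line above. A minimal sketch of that check, assuming only a POSIX shell with awk on the test host (this snippet is illustrative and not part of the captured run):

    # Relate the IOPS and MiB/s columns: MiB/s = IOPS * 12288 / 2^20.
    # The IOPS value is taken from the PCIE (0000:00:10.0) NSID 1 row below.
    awk 'BEGIN { printf "%.2f MiB/s\n", 7186.50 * 12288 / (1024 * 1024) }'
    # prints 84.22 MiB/s, matching the MiB/s column for that device

The per-device latency histograms further below come from the same run; their percentage column is cumulative, rising bucket by bucket until it reaches 100.0000% at the largest observed latency.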
00:08:25.923 ======================================================== 00:08:25.923 Latency(us) 00:08:25.923 Device Information : IOPS MiB/s Average min max 00:08:25.923 PCIE (0000:00:10.0) NSID 1 from core 0: 7186.50 84.22 17820.76 9325.52 41807.80 00:08:25.923 PCIE (0000:00:11.0) NSID 1 from core 0: 7186.50 84.22 17805.02 8716.05 41340.67 00:08:25.923 PCIE (0000:00:13.0) NSID 1 from core 0: 7186.50 84.22 17785.21 7279.03 41325.43 00:08:25.923 PCIE (0000:00:12.0) NSID 1 from core 0: 7186.50 84.22 17765.01 6433.27 40971.72 00:08:25.923 PCIE (0000:00:12.0) NSID 2 from core 0: 7186.50 84.22 17743.61 5654.93 40479.31 00:08:25.923 PCIE (0000:00:12.0) NSID 3 from core 0: 7186.50 84.22 17723.83 4965.04 40239.03 00:08:25.923 ======================================================== 00:08:25.923 Total : 43118.97 505.30 17773.91 4965.04 41807.80 00:08:25.923 00:08:25.923 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:25.923 ================================================================================= 00:08:25.923 1.00000% : 13712.148us 00:08:25.923 10.00000% : 15022.868us 00:08:25.923 25.00000% : 15930.289us 00:08:25.923 50.00000% : 17442.658us 00:08:25.923 75.00000% : 19055.852us 00:08:25.923 90.00000% : 20971.520us 00:08:25.923 95.00000% : 21778.117us 00:08:25.923 98.00000% : 24097.083us 00:08:25.923 99.00000% : 29037.489us 00:08:25.923 99.50000% : 40531.495us 00:08:25.923 99.90000% : 41539.742us 00:08:25.923 99.99000% : 41943.040us 00:08:25.924 99.99900% : 41943.040us 00:08:25.924 99.99990% : 41943.040us 00:08:25.924 99.99999% : 41943.040us 00:08:25.924 00:08:25.924 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:25.924 ================================================================================= 00:08:25.924 1.00000% : 13611.323us 00:08:25.924 10.00000% : 15123.692us 00:08:25.924 25.00000% : 15930.289us 00:08:25.924 50.00000% : 17442.658us 00:08:25.924 75.00000% : 19055.852us 00:08:25.924 90.00000% : 20870.695us 00:08:25.924 95.00000% : 21778.117us 00:08:25.924 98.00000% : 24399.557us 00:08:25.924 99.00000% : 30247.385us 00:08:25.924 99.50000% : 40329.846us 00:08:25.924 99.90000% : 41136.443us 00:08:25.924 99.99000% : 41539.742us 00:08:25.924 99.99900% : 41539.742us 00:08:25.924 99.99990% : 41539.742us 00:08:25.924 99.99999% : 41539.742us 00:08:25.924 00:08:25.924 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:25.924 ================================================================================= 00:08:25.924 1.00000% : 13107.200us 00:08:25.924 10.00000% : 15022.868us 00:08:25.924 25.00000% : 15930.289us 00:08:25.924 50.00000% : 17341.834us 00:08:25.924 75.00000% : 19055.852us 00:08:25.924 90.00000% : 21072.345us 00:08:25.924 95.00000% : 21878.942us 00:08:25.924 98.00000% : 25609.452us 00:08:25.924 99.00000% : 31255.631us 00:08:25.924 99.50000% : 40733.145us 00:08:25.924 99.90000% : 41338.092us 00:08:25.924 99.99000% : 41338.092us 00:08:25.924 99.99900% : 41338.092us 00:08:25.924 99.99990% : 41338.092us 00:08:25.924 99.99999% : 41338.092us 00:08:25.924 00:08:25.924 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:25.924 ================================================================================= 00:08:25.924 1.00000% : 12250.191us 00:08:25.924 10.00000% : 15123.692us 00:08:25.924 25.00000% : 16031.114us 00:08:25.924 50.00000% : 17341.834us 00:08:25.924 75.00000% : 19055.852us 00:08:25.924 90.00000% : 20971.520us 00:08:25.924 95.00000% : 21778.117us 00:08:25.924 98.00000% : 24399.557us 
00:08:25.924 99.00000% : 31255.631us 00:08:25.924 99.50000% : 40329.846us 00:08:25.924 99.90000% : 40934.794us 00:08:25.924 99.99000% : 41136.443us 00:08:25.924 99.99900% : 41136.443us 00:08:25.924 99.99990% : 41136.443us 00:08:25.924 99.99999% : 41136.443us 00:08:25.924 00:08:25.924 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:25.924 ================================================================================= 00:08:25.924 1.00000% : 11241.945us 00:08:25.924 10.00000% : 15022.868us 00:08:25.924 25.00000% : 15930.289us 00:08:25.924 50.00000% : 17341.834us 00:08:25.924 75.00000% : 18955.028us 00:08:25.924 90.00000% : 21072.345us 00:08:25.924 95.00000% : 21778.117us 00:08:25.924 98.00000% : 23794.609us 00:08:25.924 99.00000% : 31053.982us 00:08:25.924 99.50000% : 39926.548us 00:08:25.924 99.90000% : 40531.495us 00:08:25.924 99.99000% : 40531.495us 00:08:25.924 99.99900% : 40531.495us 00:08:25.924 99.99990% : 40531.495us 00:08:25.924 99.99999% : 40531.495us 00:08:25.924 00:08:25.924 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:25.924 ================================================================================= 00:08:25.924 1.00000% : 10334.523us 00:08:25.924 10.00000% : 15022.868us 00:08:25.924 25.00000% : 15930.289us 00:08:25.924 50.00000% : 17442.658us 00:08:25.924 75.00000% : 18955.028us 00:08:25.924 90.00000% : 21072.345us 00:08:25.924 95.00000% : 21778.117us 00:08:25.924 98.00000% : 23189.662us 00:08:25.924 99.00000% : 31658.929us 00:08:25.924 99.50000% : 39523.249us 00:08:25.924 99.90000% : 40128.197us 00:08:25.924 99.99000% : 40329.846us 00:08:25.924 99.99900% : 40329.846us 00:08:25.924 99.99990% : 40329.846us 00:08:25.924 99.99999% : 40329.846us 00:08:25.924 00:08:25.924 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:25.924 ============================================================================== 00:08:25.924 Range in us Cumulative IO count 00:08:25.924 9275.865 - 9326.277: 0.0277% ( 2) 00:08:25.924 9326.277 - 9376.689: 0.0415% ( 1) 00:08:25.924 9376.689 - 9427.102: 0.0830% ( 3) 00:08:25.924 9427.102 - 9477.514: 0.1106% ( 2) 00:08:25.924 9477.514 - 9527.926: 0.1798% ( 5) 00:08:25.924 9527.926 - 9578.338: 0.1936% ( 1) 00:08:25.924 9578.338 - 9628.751: 0.2351% ( 3) 00:08:25.924 9628.751 - 9679.163: 0.2627% ( 2) 00:08:25.924 9679.163 - 9729.575: 0.3042% ( 3) 00:08:25.924 9729.575 - 9779.988: 0.3319% ( 2) 00:08:25.924 9830.400 - 9880.812: 0.4701% ( 10) 00:08:25.924 9931.225 - 9981.637: 0.5254% ( 4) 00:08:25.924 9981.637 - 10032.049: 0.5531% ( 2) 00:08:25.924 10032.049 - 10082.462: 0.6222% ( 5) 00:08:25.924 10082.462 - 10132.874: 0.6499% ( 2) 00:08:25.924 10132.874 - 10183.286: 0.6914% ( 3) 00:08:25.924 10183.286 - 10233.698: 0.7467% ( 4) 00:08:25.924 10233.698 - 10284.111: 0.7743% ( 2) 00:08:25.924 10284.111 - 10334.523: 0.8296% ( 4) 00:08:25.924 10334.523 - 10384.935: 0.8711% ( 3) 00:08:25.924 10384.935 - 10435.348: 0.8850% ( 1) 00:08:25.924 13510.498 - 13611.323: 0.9541% ( 5) 00:08:25.924 13611.323 - 13712.148: 1.0232% ( 5) 00:08:25.924 13712.148 - 13812.972: 1.2168% ( 14) 00:08:25.924 13812.972 - 13913.797: 1.4381% ( 16) 00:08:25.924 13913.797 - 14014.622: 1.8252% ( 28) 00:08:25.924 14014.622 - 14115.446: 2.1847% ( 26) 00:08:25.924 14115.446 - 14216.271: 2.7102% ( 38) 00:08:25.924 14216.271 - 14317.095: 3.3877% ( 49) 00:08:25.924 14317.095 - 14417.920: 4.0653% ( 49) 00:08:25.924 14417.920 - 14518.745: 4.8673% ( 58) 00:08:25.924 14518.745 - 14619.569: 5.7384% ( 63) 00:08:25.924 14619.569 - 14720.394: 
6.8446% ( 80) 00:08:25.924 14720.394 - 14821.218: 8.0890% ( 90) 00:08:25.924 14821.218 - 14922.043: 9.4027% ( 95) 00:08:25.924 14922.043 - 15022.868: 10.7577% ( 98) 00:08:25.924 15022.868 - 15123.692: 12.2788% ( 110) 00:08:25.924 15123.692 - 15224.517: 13.9104% ( 118) 00:08:25.924 15224.517 - 15325.342: 15.4591% ( 112) 00:08:25.924 15325.342 - 15426.166: 17.1598% ( 123) 00:08:25.924 15426.166 - 15526.991: 18.8883% ( 125) 00:08:25.924 15526.991 - 15627.815: 20.3263% ( 104) 00:08:25.924 15627.815 - 15728.640: 22.2069% ( 136) 00:08:25.924 15728.640 - 15829.465: 23.9629% ( 127) 00:08:25.924 15829.465 - 15930.289: 26.0647% ( 152) 00:08:25.924 15930.289 - 16031.114: 27.6410% ( 114) 00:08:25.924 16031.114 - 16131.938: 29.4110% ( 128) 00:08:25.924 16131.938 - 16232.763: 31.0564% ( 119) 00:08:25.924 16232.763 - 16333.588: 32.5498% ( 108) 00:08:25.924 16333.588 - 16434.412: 33.9187% ( 99) 00:08:25.924 16434.412 - 16535.237: 35.3567% ( 104) 00:08:25.924 16535.237 - 16636.062: 36.6289% ( 92) 00:08:25.924 16636.062 - 16736.886: 37.8733% ( 90) 00:08:25.924 16736.886 - 16837.711: 39.6018% ( 125) 00:08:25.924 16837.711 - 16938.535: 41.1504% ( 112) 00:08:25.924 16938.535 - 17039.360: 42.8097% ( 120) 00:08:25.925 17039.360 - 17140.185: 44.5520% ( 126) 00:08:25.925 17140.185 - 17241.009: 46.3772% ( 132) 00:08:25.925 17241.009 - 17341.834: 48.2439% ( 135) 00:08:25.925 17341.834 - 17442.658: 50.1521% ( 138) 00:08:25.925 17442.658 - 17543.483: 51.9358% ( 129) 00:08:25.925 17543.483 - 17644.308: 53.8855% ( 141) 00:08:25.925 17644.308 - 17745.132: 55.9043% ( 146) 00:08:25.925 17745.132 - 17845.957: 57.7434% ( 133) 00:08:25.925 17845.957 - 17946.782: 59.4856% ( 126) 00:08:25.925 17946.782 - 18047.606: 61.4076% ( 139) 00:08:25.925 18047.606 - 18148.431: 63.0531% ( 119) 00:08:25.925 18148.431 - 18249.255: 64.7124% ( 120) 00:08:25.925 18249.255 - 18350.080: 66.1366% ( 103) 00:08:25.925 18350.080 - 18450.905: 67.6715% ( 111) 00:08:25.925 18450.905 - 18551.729: 68.9574% ( 93) 00:08:25.925 18551.729 - 18652.554: 70.1604% ( 87) 00:08:25.925 18652.554 - 18753.378: 71.5293% ( 99) 00:08:25.925 18753.378 - 18854.203: 72.6493% ( 81) 00:08:25.925 18854.203 - 18955.028: 74.0874% ( 104) 00:08:25.925 18955.028 - 19055.852: 75.1936% ( 80) 00:08:25.925 19055.852 - 19156.677: 76.1615% ( 70) 00:08:25.925 19156.677 - 19257.502: 77.1018% ( 68) 00:08:25.925 19257.502 - 19358.326: 78.0144% ( 66) 00:08:25.925 19358.326 - 19459.151: 78.8440% ( 60) 00:08:25.925 19459.151 - 19559.975: 79.6322% ( 57) 00:08:25.925 19559.975 - 19660.800: 80.4480% ( 59) 00:08:25.925 19660.800 - 19761.625: 81.3744% ( 67) 00:08:25.925 19761.625 - 19862.449: 82.0243% ( 47) 00:08:25.925 19862.449 - 19963.274: 83.0614% ( 75) 00:08:25.925 19963.274 - 20064.098: 83.7389% ( 49) 00:08:25.925 20064.098 - 20164.923: 84.6101% ( 63) 00:08:25.925 20164.923 - 20265.748: 85.2185% ( 44) 00:08:25.925 20265.748 - 20366.572: 86.1726% ( 69) 00:08:25.925 20366.572 - 20467.397: 86.9884% ( 59) 00:08:25.925 20467.397 - 20568.222: 87.7074% ( 52) 00:08:25.925 20568.222 - 20669.046: 88.3850% ( 49) 00:08:25.925 20669.046 - 20769.871: 89.1593% ( 56) 00:08:25.925 20769.871 - 20870.695: 89.9613% ( 58) 00:08:25.925 20870.695 - 20971.520: 90.8048% ( 61) 00:08:25.925 20971.520 - 21072.345: 91.3440% ( 39) 00:08:25.925 21072.345 - 21173.169: 91.9386% ( 43) 00:08:25.925 21173.169 - 21273.994: 92.4917% ( 40) 00:08:25.925 21273.994 - 21374.818: 92.9480% ( 33) 00:08:25.925 21374.818 - 21475.643: 93.6532% ( 51) 00:08:25.925 21475.643 - 21576.468: 94.1787% ( 38) 00:08:25.925 21576.468 - 21677.292: 
94.5658% ( 28) 00:08:25.925 21677.292 - 21778.117: 95.0221% ( 33) 00:08:25.925 21778.117 - 21878.942: 95.6167% ( 43) 00:08:25.925 21878.942 - 21979.766: 95.9071% ( 21) 00:08:25.925 21979.766 - 22080.591: 96.3910% ( 35) 00:08:25.925 22080.591 - 22181.415: 96.6538% ( 19) 00:08:25.925 22181.415 - 22282.240: 96.8612% ( 15) 00:08:25.925 22282.240 - 22383.065: 96.9718% ( 8) 00:08:25.925 22383.065 - 22483.889: 97.0824% ( 8) 00:08:25.925 22483.889 - 22584.714: 97.2622% ( 13) 00:08:25.925 22584.714 - 22685.538: 97.3451% ( 6) 00:08:25.925 22685.538 - 22786.363: 97.4419% ( 7) 00:08:25.925 22786.363 - 22887.188: 97.4972% ( 4) 00:08:25.925 22887.188 - 22988.012: 97.5802% ( 6) 00:08:25.925 22988.012 - 23088.837: 97.6632% ( 6) 00:08:25.925 23088.837 - 23189.662: 97.7046% ( 3) 00:08:25.925 23189.662 - 23290.486: 97.7461% ( 3) 00:08:25.925 23290.486 - 23391.311: 97.7738% ( 2) 00:08:25.925 23391.311 - 23492.135: 97.8014% ( 2) 00:08:25.925 23492.135 - 23592.960: 97.8567% ( 4) 00:08:25.925 23592.960 - 23693.785: 97.8982% ( 3) 00:08:25.925 23693.785 - 23794.609: 97.9259% ( 2) 00:08:25.925 23794.609 - 23895.434: 97.9950% ( 5) 00:08:25.925 23996.258 - 24097.083: 98.0503% ( 4) 00:08:25.925 24197.908 - 24298.732: 98.1471% ( 7) 00:08:25.925 24399.557 - 24500.382: 98.2301% ( 6) 00:08:25.925 26819.348 - 27020.997: 98.2854% ( 4) 00:08:25.925 27020.997 - 27222.646: 98.3545% ( 5) 00:08:25.925 27222.646 - 27424.295: 98.4375% ( 6) 00:08:25.925 27424.295 - 27625.945: 98.5066% ( 5) 00:08:25.925 27625.945 - 27827.594: 98.5758% ( 5) 00:08:25.925 27827.594 - 28029.243: 98.6587% ( 6) 00:08:25.925 28029.243 - 28230.892: 98.7140% ( 4) 00:08:25.925 28230.892 - 28432.542: 98.8247% ( 8) 00:08:25.925 28432.542 - 28634.191: 98.8938% ( 5) 00:08:25.925 28634.191 - 28835.840: 98.9768% ( 6) 00:08:25.925 28835.840 - 29037.489: 99.0736% ( 7) 00:08:25.925 29037.489 - 29239.138: 99.1150% ( 3) 00:08:25.925 39321.600 - 39523.249: 99.1980% ( 6) 00:08:25.925 39523.249 - 39724.898: 99.2533% ( 4) 00:08:25.925 39724.898 - 39926.548: 99.3225% ( 5) 00:08:25.925 39926.548 - 40128.197: 99.3916% ( 5) 00:08:25.925 40128.197 - 40329.846: 99.4746% ( 6) 00:08:25.925 40329.846 - 40531.495: 99.5575% ( 6) 00:08:25.925 40531.495 - 40733.145: 99.6267% ( 5) 00:08:25.925 40733.145 - 40934.794: 99.7096% ( 6) 00:08:25.925 40934.794 - 41136.443: 99.7788% ( 5) 00:08:25.925 41136.443 - 41338.092: 99.8617% ( 6) 00:08:25.925 41338.092 - 41539.742: 99.9447% ( 6) 00:08:25.925 41539.742 - 41741.391: 99.9723% ( 2) 00:08:25.925 41741.391 - 41943.040: 100.0000% ( 2) 00:08:25.925 00:08:25.925 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:25.925 ============================================================================== 00:08:25.925 Range in us Cumulative IO count 00:08:25.925 8670.917 - 8721.329: 0.0138% ( 1) 00:08:25.925 8721.329 - 8771.742: 0.0553% ( 3) 00:08:25.925 8771.742 - 8822.154: 0.0830% ( 2) 00:08:25.925 8822.154 - 8872.566: 0.1383% ( 4) 00:08:25.925 8872.566 - 8922.978: 0.1798% ( 3) 00:08:25.925 8922.978 - 8973.391: 0.2212% ( 3) 00:08:25.925 8973.391 - 9023.803: 0.2627% ( 3) 00:08:25.925 9023.803 - 9074.215: 0.3180% ( 4) 00:08:25.925 9074.215 - 9124.628: 0.3733% ( 4) 00:08:25.925 9124.628 - 9175.040: 0.4148% ( 3) 00:08:25.925 9175.040 - 9225.452: 0.4701% ( 4) 00:08:25.925 9225.452 - 9275.865: 0.5254% ( 4) 00:08:25.925 9275.865 - 9326.277: 0.5808% ( 4) 00:08:25.925 9326.277 - 9376.689: 0.6084% ( 2) 00:08:25.925 9376.689 - 9427.102: 0.6637% ( 4) 00:08:25.925 9427.102 - 9477.514: 0.7190% ( 4) 00:08:25.925 9477.514 - 9527.926: 0.7743% ( 4) 
00:08:25.925 9527.926 - 9578.338: 0.8158% ( 3) 00:08:25.925 9578.338 - 9628.751: 0.8711% ( 4) 00:08:25.925 9628.751 - 9679.163: 0.8850% ( 1) 00:08:25.925 13409.674 - 13510.498: 0.8988% ( 1) 00:08:25.925 13510.498 - 13611.323: 1.0094% ( 8) 00:08:25.925 13611.323 - 13712.148: 1.1200% ( 8) 00:08:25.925 13712.148 - 13812.972: 1.2445% ( 9) 00:08:25.925 13812.972 - 13913.797: 1.5348% ( 21) 00:08:25.925 13913.797 - 14014.622: 1.9912% ( 33) 00:08:25.925 14014.622 - 14115.446: 2.4751% ( 35) 00:08:25.925 14115.446 - 14216.271: 2.9729% ( 36) 00:08:25.925 14216.271 - 14317.095: 3.6090% ( 46) 00:08:25.925 14317.095 - 14417.920: 4.2450% ( 46) 00:08:25.925 14417.920 - 14518.745: 4.8534% ( 44) 00:08:25.925 14518.745 - 14619.569: 5.4480% ( 43) 00:08:25.925 14619.569 - 14720.394: 6.0564% ( 44) 00:08:25.925 14720.394 - 14821.218: 6.9690% ( 66) 00:08:25.925 14821.218 - 14922.043: 8.0752% ( 80) 00:08:25.925 14922.043 - 15022.868: 9.1814% ( 80) 00:08:25.925 15022.868 - 15123.692: 10.3706% ( 86) 00:08:25.925 15123.692 - 15224.517: 11.9192% ( 112) 00:08:25.925 15224.517 - 15325.342: 13.7721% ( 134) 00:08:25.925 15325.342 - 15426.166: 15.4591% ( 122) 00:08:25.925 15426.166 - 15526.991: 17.2428% ( 129) 00:08:25.925 15526.991 - 15627.815: 19.2201% ( 143) 00:08:25.925 15627.815 - 15728.640: 21.2804% ( 149) 00:08:25.926 15728.640 - 15829.465: 23.5205% ( 162) 00:08:25.926 15829.465 - 15930.289: 25.6361% ( 153) 00:08:25.926 15930.289 - 16031.114: 27.8761% ( 162) 00:08:25.926 16031.114 - 16131.938: 29.7566% ( 136) 00:08:25.926 16131.938 - 16232.763: 31.2915% ( 111) 00:08:25.926 16232.763 - 16333.588: 33.0061% ( 124) 00:08:25.926 16333.588 - 16434.412: 34.5824% ( 114) 00:08:25.926 16434.412 - 16535.237: 36.2832% ( 123) 00:08:25.926 16535.237 - 16636.062: 37.8042% ( 110) 00:08:25.926 16636.062 - 16736.886: 39.3805% ( 114) 00:08:25.926 16736.886 - 16837.711: 40.9154% ( 111) 00:08:25.926 16837.711 - 16938.535: 42.3396% ( 103) 00:08:25.926 16938.535 - 17039.360: 43.8468% ( 109) 00:08:25.926 17039.360 - 17140.185: 45.7135% ( 135) 00:08:25.926 17140.185 - 17241.009: 47.5525% ( 133) 00:08:25.926 17241.009 - 17341.834: 49.3778% ( 132) 00:08:25.926 17341.834 - 17442.658: 51.0785% ( 123) 00:08:25.926 17442.658 - 17543.483: 52.7655% ( 122) 00:08:25.926 17543.483 - 17644.308: 54.5077% ( 126) 00:08:25.926 17644.308 - 17745.132: 56.2915% ( 129) 00:08:25.926 17745.132 - 17845.957: 58.4347% ( 155) 00:08:25.926 17845.957 - 17946.782: 60.4535% ( 146) 00:08:25.926 17946.782 - 18047.606: 62.5000% ( 148) 00:08:25.926 18047.606 - 18148.431: 64.4358% ( 140) 00:08:25.926 18148.431 - 18249.255: 65.9983% ( 113) 00:08:25.926 18249.255 - 18350.080: 67.4226% ( 103) 00:08:25.926 18350.080 - 18450.905: 68.7777% ( 98) 00:08:25.926 18450.905 - 18551.729: 70.0221% ( 90) 00:08:25.926 18551.729 - 18652.554: 71.2804% ( 91) 00:08:25.926 18652.554 - 18753.378: 72.4281% ( 83) 00:08:25.926 18753.378 - 18854.203: 73.6173% ( 86) 00:08:25.926 18854.203 - 18955.028: 74.7788% ( 84) 00:08:25.926 18955.028 - 19055.852: 75.8711% ( 79) 00:08:25.926 19055.852 - 19156.677: 76.8252% ( 69) 00:08:25.926 19156.677 - 19257.502: 77.7102% ( 64) 00:08:25.926 19257.502 - 19358.326: 78.5537% ( 61) 00:08:25.926 19358.326 - 19459.151: 79.2450% ( 50) 00:08:25.926 19459.151 - 19559.975: 79.9502% ( 51) 00:08:25.926 19559.975 - 19660.800: 80.5448% ( 43) 00:08:25.926 19660.800 - 19761.625: 81.4159% ( 63) 00:08:25.926 19761.625 - 19862.449: 82.2594% ( 61) 00:08:25.926 19862.449 - 19963.274: 83.1720% ( 66) 00:08:25.926 19963.274 - 20064.098: 84.0293% ( 62) 00:08:25.926 20064.098 - 20164.923: 
84.7760% ( 54) 00:08:25.926 20164.923 - 20265.748: 85.6056% ( 60) 00:08:25.926 20265.748 - 20366.572: 86.4906% ( 64) 00:08:25.926 20366.572 - 20467.397: 87.3617% ( 63) 00:08:25.926 20467.397 - 20568.222: 88.2052% ( 61) 00:08:25.926 20568.222 - 20669.046: 89.0902% ( 64) 00:08:25.926 20669.046 - 20769.871: 89.9751% ( 64) 00:08:25.926 20769.871 - 20870.695: 90.6803% ( 51) 00:08:25.926 20870.695 - 20971.520: 91.3993% ( 52) 00:08:25.926 20971.520 - 21072.345: 92.0354% ( 46) 00:08:25.926 21072.345 - 21173.169: 92.6715% ( 46) 00:08:25.926 21173.169 - 21273.994: 93.2937% ( 45) 00:08:25.926 21273.994 - 21374.818: 93.6947% ( 29) 00:08:25.926 21374.818 - 21475.643: 94.0957% ( 29) 00:08:25.926 21475.643 - 21576.468: 94.5105% ( 30) 00:08:25.926 21576.468 - 21677.292: 94.8977% ( 28) 00:08:25.926 21677.292 - 21778.117: 95.2848% ( 28) 00:08:25.926 21778.117 - 21878.942: 95.6444% ( 26) 00:08:25.926 21878.942 - 21979.766: 96.0315% ( 28) 00:08:25.926 21979.766 - 22080.591: 96.3357% ( 22) 00:08:25.926 22080.591 - 22181.415: 96.5985% ( 19) 00:08:25.926 22181.415 - 22282.240: 96.7644% ( 12) 00:08:25.926 22282.240 - 22383.065: 96.9027% ( 10) 00:08:25.926 22383.065 - 22483.889: 97.0271% ( 9) 00:08:25.926 22483.889 - 22584.714: 97.0824% ( 4) 00:08:25.926 22584.714 - 22685.538: 97.1515% ( 5) 00:08:25.926 22685.538 - 22786.363: 97.2069% ( 4) 00:08:25.926 22786.363 - 22887.188: 97.2483% ( 3) 00:08:25.926 22887.188 - 22988.012: 97.3037% ( 4) 00:08:25.926 22988.012 - 23088.837: 97.3590% ( 4) 00:08:25.926 23088.837 - 23189.662: 97.4143% ( 4) 00:08:25.926 23189.662 - 23290.486: 97.4558% ( 3) 00:08:25.926 23290.486 - 23391.311: 97.5111% ( 4) 00:08:25.926 23391.311 - 23492.135: 97.5664% ( 4) 00:08:25.926 23492.135 - 23592.960: 97.6217% ( 4) 00:08:25.926 23592.960 - 23693.785: 97.6770% ( 4) 00:08:25.926 23693.785 - 23794.609: 97.7323% ( 4) 00:08:25.926 23794.609 - 23895.434: 97.7876% ( 4) 00:08:25.926 23895.434 - 23996.258: 97.8291% ( 3) 00:08:25.926 23996.258 - 24097.083: 97.8706% ( 3) 00:08:25.926 24097.083 - 24197.908: 97.9259% ( 4) 00:08:25.926 24197.908 - 24298.732: 97.9812% ( 4) 00:08:25.926 24298.732 - 24399.557: 98.0365% ( 4) 00:08:25.926 24399.557 - 24500.382: 98.0780% ( 3) 00:08:25.926 24500.382 - 24601.206: 98.1333% ( 4) 00:08:25.926 24601.206 - 24702.031: 98.1886% ( 4) 00:08:25.926 24702.031 - 24802.855: 98.2301% ( 3) 00:08:25.926 28432.542 - 28634.191: 98.2439% ( 1) 00:08:25.926 28634.191 - 28835.840: 98.3545% ( 8) 00:08:25.926 28835.840 - 29037.489: 98.4513% ( 7) 00:08:25.926 29037.489 - 29239.138: 98.5619% ( 8) 00:08:25.926 29239.138 - 29440.788: 98.6449% ( 6) 00:08:25.926 29440.788 - 29642.437: 98.7555% ( 8) 00:08:25.926 29642.437 - 29844.086: 98.8662% ( 8) 00:08:25.926 29844.086 - 30045.735: 98.9491% ( 6) 00:08:25.926 30045.735 - 30247.385: 99.0459% ( 7) 00:08:25.926 30247.385 - 30449.034: 99.1150% ( 5) 00:08:25.926 39321.600 - 39523.249: 99.1980% ( 6) 00:08:25.926 39523.249 - 39724.898: 99.2671% ( 5) 00:08:25.926 39724.898 - 39926.548: 99.3639% ( 7) 00:08:25.926 39926.548 - 40128.197: 99.4469% ( 6) 00:08:25.926 40128.197 - 40329.846: 99.5299% ( 6) 00:08:25.926 40329.846 - 40531.495: 99.6267% ( 7) 00:08:25.926 40531.495 - 40733.145: 99.6958% ( 5) 00:08:25.926 40733.145 - 40934.794: 99.8064% ( 8) 00:08:25.926 40934.794 - 41136.443: 99.9032% ( 7) 00:08:25.926 41136.443 - 41338.092: 99.9862% ( 6) 00:08:25.926 41338.092 - 41539.742: 100.0000% ( 1) 00:08:25.926 00:08:25.926 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:25.926 
============================================================================== 00:08:25.926 Range in us Cumulative IO count 00:08:25.926 7259.372 - 7309.785: 0.0691% ( 5) 00:08:25.926 7309.785 - 7360.197: 0.1244% ( 4) 00:08:25.926 7360.197 - 7410.609: 0.1798% ( 4) 00:08:25.926 7410.609 - 7461.022: 0.2212% ( 3) 00:08:25.926 7461.022 - 7511.434: 0.2765% ( 4) 00:08:25.926 7511.434 - 7561.846: 0.3457% ( 5) 00:08:25.926 7561.846 - 7612.258: 0.4010% ( 4) 00:08:25.926 7612.258 - 7662.671: 0.4563% ( 4) 00:08:25.926 7662.671 - 7713.083: 0.5116% ( 4) 00:08:25.926 7713.083 - 7763.495: 0.5531% ( 3) 00:08:25.926 7763.495 - 7813.908: 0.6222% ( 5) 00:08:25.926 7813.908 - 7864.320: 0.6775% ( 4) 00:08:25.926 7864.320 - 7914.732: 0.7329% ( 4) 00:08:25.926 7914.732 - 7965.145: 0.7882% ( 4) 00:08:25.926 7965.145 - 8015.557: 0.8435% ( 4) 00:08:25.926 8015.557 - 8065.969: 0.8711% ( 2) 00:08:25.926 8065.969 - 8116.382: 0.8850% ( 1) 00:08:25.926 12804.726 - 12855.138: 0.8988% ( 1) 00:08:25.926 12855.138 - 12905.551: 0.9264% ( 2) 00:08:25.926 12905.551 - 13006.375: 0.9956% ( 5) 00:08:25.926 13006.375 - 13107.200: 1.1477% ( 11) 00:08:25.926 13107.200 - 13208.025: 1.3274% ( 13) 00:08:25.926 13208.025 - 13308.849: 1.5625% ( 17) 00:08:25.926 13308.849 - 13409.674: 1.7008% ( 10) 00:08:25.926 13409.674 - 13510.498: 1.9082% ( 15) 00:08:25.926 13510.498 - 13611.323: 2.1709% ( 19) 00:08:25.927 13611.323 - 13712.148: 2.4198% ( 18) 00:08:25.927 13712.148 - 13812.972: 2.6687% ( 18) 00:08:25.927 13812.972 - 13913.797: 2.9176% ( 18) 00:08:25.927 13913.797 - 14014.622: 3.1112% ( 14) 00:08:25.927 14014.622 - 14115.446: 3.4430% ( 24) 00:08:25.927 14115.446 - 14216.271: 3.8164% ( 27) 00:08:25.927 14216.271 - 14317.095: 4.2727% ( 33) 00:08:25.927 14317.095 - 14417.920: 4.9226% ( 47) 00:08:25.927 14417.920 - 14518.745: 5.6831% ( 55) 00:08:25.927 14518.745 - 14619.569: 6.4298% ( 54) 00:08:25.927 14619.569 - 14720.394: 7.2317% ( 58) 00:08:25.927 14720.394 - 14821.218: 8.1029% ( 63) 00:08:25.927 14821.218 - 14922.043: 9.3888% ( 93) 00:08:25.927 14922.043 - 15022.868: 10.6333% ( 90) 00:08:25.927 15022.868 - 15123.692: 11.9192% ( 93) 00:08:25.927 15123.692 - 15224.517: 13.2882% ( 99) 00:08:25.927 15224.517 - 15325.342: 15.0857% ( 130) 00:08:25.927 15325.342 - 15426.166: 16.8280% ( 126) 00:08:25.927 15426.166 - 15526.991: 18.4735% ( 119) 00:08:25.927 15526.991 - 15627.815: 20.0498% ( 114) 00:08:25.927 15627.815 - 15728.640: 21.8473% ( 130) 00:08:25.927 15728.640 - 15829.465: 23.7417% ( 137) 00:08:25.927 15829.465 - 15930.289: 25.6637% ( 139) 00:08:25.927 15930.289 - 16031.114: 27.9867% ( 168) 00:08:25.927 16031.114 - 16131.938: 30.1715% ( 158) 00:08:25.927 16131.938 - 16232.763: 32.1073% ( 140) 00:08:25.927 16232.763 - 16333.588: 33.9463% ( 133) 00:08:25.927 16333.588 - 16434.412: 35.7163% ( 128) 00:08:25.927 16434.412 - 16535.237: 37.6106% ( 137) 00:08:25.927 16535.237 - 16636.062: 39.4358% ( 132) 00:08:25.927 16636.062 - 16736.886: 41.0675% ( 118) 00:08:25.927 16736.886 - 16837.711: 42.5747% ( 109) 00:08:25.927 16837.711 - 16938.535: 44.0680% ( 108) 00:08:25.927 16938.535 - 17039.360: 45.3263% ( 91) 00:08:25.927 17039.360 - 17140.185: 46.9027% ( 114) 00:08:25.927 17140.185 - 17241.009: 48.6034% ( 123) 00:08:25.927 17241.009 - 17341.834: 50.3733% ( 128) 00:08:25.927 17341.834 - 17442.658: 52.1986% ( 132) 00:08:25.927 17442.658 - 17543.483: 53.9685% ( 128) 00:08:25.927 17543.483 - 17644.308: 55.6831% ( 124) 00:08:25.927 17644.308 - 17745.132: 57.3838% ( 123) 00:08:25.927 17745.132 - 17845.957: 59.1676% ( 129) 00:08:25.927 17845.957 - 
17946.782: 60.8960% ( 125) 00:08:25.927 17946.782 - 18047.606: 62.7351% ( 133) 00:08:25.927 18047.606 - 18148.431: 64.3805% ( 119) 00:08:25.927 18148.431 - 18249.255: 65.9015% ( 110) 00:08:25.927 18249.255 - 18350.080: 67.4779% ( 114) 00:08:25.927 18350.080 - 18450.905: 68.8744% ( 101) 00:08:25.927 18450.905 - 18551.729: 70.0498% ( 85) 00:08:25.927 18551.729 - 18652.554: 71.2666% ( 88) 00:08:25.927 18652.554 - 18753.378: 72.5387% ( 92) 00:08:25.927 18753.378 - 18854.203: 73.8662% ( 96) 00:08:25.927 18854.203 - 18955.028: 74.8202% ( 69) 00:08:25.927 18955.028 - 19055.852: 75.8435% ( 74) 00:08:25.927 19055.852 - 19156.677: 76.6316% ( 57) 00:08:25.927 19156.677 - 19257.502: 77.4751% ( 61) 00:08:25.927 19257.502 - 19358.326: 78.2633% ( 57) 00:08:25.927 19358.326 - 19459.151: 79.0514% ( 57) 00:08:25.927 19459.151 - 19559.975: 79.8258% ( 56) 00:08:25.927 19559.975 - 19660.800: 80.5725% ( 54) 00:08:25.927 19660.800 - 19761.625: 81.3606% ( 57) 00:08:25.927 19761.625 - 19862.449: 82.0658% ( 51) 00:08:25.927 19862.449 - 19963.274: 82.7434% ( 49) 00:08:25.927 19963.274 - 20064.098: 83.6836% ( 68) 00:08:25.927 20064.098 - 20164.923: 84.4856% ( 58) 00:08:25.927 20164.923 - 20265.748: 85.2738% ( 57) 00:08:25.927 20265.748 - 20366.572: 86.0896% ( 59) 00:08:25.927 20366.572 - 20467.397: 86.6842% ( 43) 00:08:25.927 20467.397 - 20568.222: 87.3341% ( 47) 00:08:25.927 20568.222 - 20669.046: 88.0669% ( 53) 00:08:25.927 20669.046 - 20769.871: 88.7306% ( 48) 00:08:25.927 20769.871 - 20870.695: 89.3529% ( 45) 00:08:25.927 20870.695 - 20971.520: 89.9889% ( 46) 00:08:25.927 20971.520 - 21072.345: 90.7356% ( 54) 00:08:25.927 21072.345 - 21173.169: 91.3440% ( 44) 00:08:25.927 21173.169 - 21273.994: 91.8556% ( 37) 00:08:25.927 21273.994 - 21374.818: 92.3396% ( 35) 00:08:25.927 21374.818 - 21475.643: 92.9204% ( 42) 00:08:25.927 21475.643 - 21576.468: 93.5426% ( 45) 00:08:25.927 21576.468 - 21677.292: 94.0819% ( 39) 00:08:25.927 21677.292 - 21778.117: 94.6350% ( 40) 00:08:25.927 21778.117 - 21878.942: 95.0913% ( 33) 00:08:25.927 21878.942 - 21979.766: 95.4923% ( 29) 00:08:25.927 21979.766 - 22080.591: 95.8241% ( 24) 00:08:25.927 22080.591 - 22181.415: 96.1698% ( 25) 00:08:25.927 22181.415 - 22282.240: 96.4187% ( 18) 00:08:25.927 22282.240 - 22383.065: 96.5846% ( 12) 00:08:25.927 22383.065 - 22483.889: 96.7644% ( 13) 00:08:25.927 22483.889 - 22584.714: 96.8888% ( 9) 00:08:25.927 22584.714 - 22685.538: 96.9441% ( 4) 00:08:25.927 22685.538 - 22786.363: 96.9994% ( 4) 00:08:25.927 22786.363 - 22887.188: 97.0548% ( 4) 00:08:25.927 22887.188 - 22988.012: 97.1101% ( 4) 00:08:25.927 22988.012 - 23088.837: 97.1515% ( 3) 00:08:25.927 23088.837 - 23189.662: 97.1930% ( 3) 00:08:25.927 23189.662 - 23290.486: 97.2483% ( 4) 00:08:25.927 23290.486 - 23391.311: 97.2898% ( 3) 00:08:25.927 23391.311 - 23492.135: 97.3313% ( 3) 00:08:25.927 23492.135 - 23592.960: 97.3451% ( 1) 00:08:25.927 23996.258 - 24097.083: 97.3728% ( 2) 00:08:25.927 24097.083 - 24197.908: 97.4143% ( 3) 00:08:25.927 24197.908 - 24298.732: 97.4558% ( 3) 00:08:25.927 24298.732 - 24399.557: 97.4972% ( 3) 00:08:25.927 24399.557 - 24500.382: 97.5387% ( 3) 00:08:25.927 24500.382 - 24601.206: 97.5940% ( 4) 00:08:25.927 24601.206 - 24702.031: 97.6493% ( 4) 00:08:25.927 24702.031 - 24802.855: 97.7046% ( 4) 00:08:25.927 24802.855 - 24903.680: 97.7461% ( 3) 00:08:25.927 24903.680 - 25004.505: 97.7876% ( 3) 00:08:25.927 25004.505 - 25105.329: 97.8291% ( 3) 00:08:25.927 25105.329 - 25206.154: 97.8706% ( 3) 00:08:25.927 25206.154 - 25306.978: 97.9121% ( 3) 00:08:25.927 25306.978 - 
25407.803: 97.9535% ( 3) 00:08:25.927 25407.803 - 25508.628: 97.9950% ( 3) 00:08:25.927 25508.628 - 25609.452: 98.0365% ( 3) 00:08:25.927 25609.452 - 25710.277: 98.0780% ( 3) 00:08:25.927 25710.277 - 25811.102: 98.1333% ( 4) 00:08:25.927 25811.102 - 26012.751: 98.2163% ( 6) 00:08:25.927 26012.751 - 26214.400: 98.2301% ( 1) 00:08:25.927 29239.138 - 29440.788: 98.2439% ( 1) 00:08:25.927 29440.788 - 29642.437: 98.2992% ( 4) 00:08:25.927 29642.437 - 29844.086: 98.3822% ( 6) 00:08:25.927 29844.086 - 30045.735: 98.4652% ( 6) 00:08:25.927 30045.735 - 30247.385: 98.5343% ( 5) 00:08:25.927 30247.385 - 30449.034: 98.6311% ( 7) 00:08:25.927 30449.034 - 30650.683: 98.7140% ( 6) 00:08:25.927 30650.683 - 30852.332: 98.8108% ( 7) 00:08:25.927 30852.332 - 31053.982: 98.9076% ( 7) 00:08:25.927 31053.982 - 31255.631: 99.0044% ( 7) 00:08:25.927 31255.631 - 31457.280: 99.0874% ( 6) 00:08:25.927 31457.280 - 31658.929: 99.1150% ( 2) 00:08:25.927 39926.548 - 40128.197: 99.1842% ( 5) 00:08:25.927 40128.197 - 40329.846: 99.3086% ( 9) 00:08:25.927 40329.846 - 40531.495: 99.4607% ( 11) 00:08:25.927 40531.495 - 40733.145: 99.5852% ( 9) 00:08:25.927 40733.145 - 40934.794: 99.7235% ( 10) 00:08:25.927 40934.794 - 41136.443: 99.8617% ( 10) 00:08:25.927 41136.443 - 41338.092: 100.0000% ( 10) 00:08:25.927 00:08:25.927 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:25.928 ============================================================================== 00:08:25.928 Range in us Cumulative IO count 00:08:25.928 6427.569 - 6452.775: 0.0138% ( 1) 00:08:25.928 6452.775 - 6503.188: 0.0691% ( 4) 00:08:25.928 6503.188 - 6553.600: 0.1244% ( 4) 00:08:25.928 6553.600 - 6604.012: 0.1798% ( 4) 00:08:25.928 6604.012 - 6654.425: 0.2627% ( 6) 00:08:25.928 6654.425 - 6704.837: 0.3180% ( 4) 00:08:25.928 6704.837 - 6755.249: 0.3733% ( 4) 00:08:25.928 6755.249 - 6805.662: 0.4287% ( 4) 00:08:25.928 6805.662 - 6856.074: 0.4701% ( 3) 00:08:25.928 6856.074 - 6906.486: 0.5116% ( 3) 00:08:25.928 6906.486 - 6956.898: 0.5531% ( 3) 00:08:25.928 6956.898 - 7007.311: 0.5946% ( 3) 00:08:25.928 7007.311 - 7057.723: 0.6361% ( 3) 00:08:25.928 7057.723 - 7108.135: 0.6914% ( 4) 00:08:25.928 7108.135 - 7158.548: 0.7467% ( 4) 00:08:25.928 7158.548 - 7208.960: 0.8020% ( 4) 00:08:25.928 7208.960 - 7259.372: 0.8573% ( 4) 00:08:25.928 7259.372 - 7309.785: 0.8850% ( 2) 00:08:25.928 12098.954 - 12149.366: 0.9403% ( 4) 00:08:25.928 12149.366 - 12199.778: 0.9956% ( 4) 00:08:25.928 12199.778 - 12250.191: 1.0509% ( 4) 00:08:25.928 12250.191 - 12300.603: 1.0924% ( 3) 00:08:25.928 12300.603 - 12351.015: 1.1477% ( 4) 00:08:25.928 12351.015 - 12401.428: 1.2030% ( 4) 00:08:25.928 12401.428 - 12451.840: 1.2583% ( 4) 00:08:25.928 12451.840 - 12502.252: 1.3136% ( 4) 00:08:25.928 12502.252 - 12552.665: 1.3689% ( 4) 00:08:25.928 12552.665 - 12603.077: 1.3827% ( 1) 00:08:25.928 12603.077 - 12653.489: 1.4242% ( 3) 00:08:25.928 12653.489 - 12703.902: 1.4795% ( 4) 00:08:25.928 12703.902 - 12754.314: 1.5348% ( 4) 00:08:25.928 12754.314 - 12804.726: 1.5902% ( 4) 00:08:25.928 12804.726 - 12855.138: 1.6455% ( 4) 00:08:25.928 12855.138 - 12905.551: 1.7008% ( 4) 00:08:25.928 12905.551 - 13006.375: 1.7699% ( 5) 00:08:25.928 13409.674 - 13510.498: 1.7837% ( 1) 00:08:25.928 13510.498 - 13611.323: 1.8667% ( 6) 00:08:25.928 13611.323 - 13712.148: 1.9635% ( 7) 00:08:25.928 13712.148 - 13812.972: 2.1018% ( 10) 00:08:25.928 13812.972 - 13913.797: 2.2954% ( 14) 00:08:25.928 13913.797 - 14014.622: 2.5442% ( 18) 00:08:25.928 14014.622 - 14115.446: 2.7655% ( 16) 00:08:25.928 14115.446 
- 14216.271: 3.0697% ( 22) 00:08:25.928 14216.271 - 14317.095: 3.3877% ( 23) 00:08:25.928 14317.095 - 14417.920: 4.0514% ( 48) 00:08:25.928 14417.920 - 14518.745: 4.6322% ( 42) 00:08:25.928 14518.745 - 14619.569: 5.2406% ( 44) 00:08:25.928 14619.569 - 14720.394: 5.9873% ( 54) 00:08:25.928 14720.394 - 14821.218: 6.9967% ( 73) 00:08:25.928 14821.218 - 14922.043: 8.3794% ( 100) 00:08:25.928 14922.043 - 15022.868: 9.4580% ( 78) 00:08:25.928 15022.868 - 15123.692: 10.6471% ( 86) 00:08:25.928 15123.692 - 15224.517: 11.9607% ( 95) 00:08:25.928 15224.517 - 15325.342: 13.4541% ( 108) 00:08:25.928 15325.342 - 15426.166: 15.3346% ( 136) 00:08:25.928 15426.166 - 15526.991: 17.2013% ( 135) 00:08:25.928 15526.991 - 15627.815: 19.0819% ( 136) 00:08:25.928 15627.815 - 15728.640: 20.7273% ( 119) 00:08:25.928 15728.640 - 15829.465: 22.6770% ( 141) 00:08:25.928 15829.465 - 15930.289: 24.7096% ( 147) 00:08:25.928 15930.289 - 16031.114: 26.9220% ( 160) 00:08:25.928 16031.114 - 16131.938: 28.9823% ( 149) 00:08:25.928 16131.938 - 16232.763: 31.1532% ( 157) 00:08:25.928 16232.763 - 16333.588: 33.4762% ( 168) 00:08:25.928 16333.588 - 16434.412: 35.3014% ( 132) 00:08:25.928 16434.412 - 16535.237: 37.1958% ( 137) 00:08:25.928 16535.237 - 16636.062: 38.7998% ( 116) 00:08:25.928 16636.062 - 16736.886: 40.3623% ( 113) 00:08:25.928 16736.886 - 16837.711: 42.1045% ( 126) 00:08:25.928 16837.711 - 16938.535: 43.8191% ( 124) 00:08:25.928 16938.535 - 17039.360: 45.5337% ( 124) 00:08:25.928 17039.360 - 17140.185: 47.3037% ( 128) 00:08:25.928 17140.185 - 17241.009: 49.1427% ( 133) 00:08:25.928 17241.009 - 17341.834: 50.7882% ( 119) 00:08:25.928 17341.834 - 17442.658: 52.5581% ( 128) 00:08:25.928 17442.658 - 17543.483: 54.5492% ( 144) 00:08:25.928 17543.483 - 17644.308: 56.5680% ( 146) 00:08:25.928 17644.308 - 17745.132: 58.5730% ( 145) 00:08:25.928 17745.132 - 17845.957: 60.6333% ( 149) 00:08:25.928 17845.957 - 17946.782: 62.5553% ( 139) 00:08:25.928 17946.782 - 18047.606: 64.4497% ( 137) 00:08:25.928 18047.606 - 18148.431: 66.2887% ( 133) 00:08:25.928 18148.431 - 18249.255: 67.9065% ( 117) 00:08:25.928 18249.255 - 18350.080: 69.1510% ( 90) 00:08:25.928 18350.080 - 18450.905: 70.3816% ( 89) 00:08:25.928 18450.905 - 18551.729: 71.5293% ( 83) 00:08:25.928 18551.729 - 18652.554: 72.7046% ( 85) 00:08:25.928 18652.554 - 18753.378: 73.5205% ( 59) 00:08:25.928 18753.378 - 18854.203: 74.2395% ( 52) 00:08:25.928 18854.203 - 18955.028: 74.9447% ( 51) 00:08:25.928 18955.028 - 19055.852: 75.8711% ( 67) 00:08:25.928 19055.852 - 19156.677: 76.6731% ( 58) 00:08:25.928 19156.677 - 19257.502: 77.5719% ( 65) 00:08:25.928 19257.502 - 19358.326: 78.5398% ( 70) 00:08:25.928 19358.326 - 19459.151: 79.3556% ( 59) 00:08:25.928 19459.151 - 19559.975: 80.0470% ( 50) 00:08:25.928 19559.975 - 19660.800: 80.7107% ( 48) 00:08:25.928 19660.800 - 19761.625: 81.3053% ( 43) 00:08:25.928 19761.625 - 19862.449: 81.9690% ( 48) 00:08:25.928 19862.449 - 19963.274: 82.7434% ( 56) 00:08:25.928 19963.274 - 20064.098: 83.2965% ( 40) 00:08:25.928 20064.098 - 20164.923: 84.0155% ( 52) 00:08:25.928 20164.923 - 20265.748: 84.7622% ( 54) 00:08:25.928 20265.748 - 20366.572: 85.3982% ( 46) 00:08:25.928 20366.572 - 20467.397: 86.1726% ( 56) 00:08:25.928 20467.397 - 20568.222: 86.8778% ( 51) 00:08:25.928 20568.222 - 20669.046: 87.7212% ( 61) 00:08:25.928 20669.046 - 20769.871: 88.6062% ( 64) 00:08:25.928 20769.871 - 20870.695: 89.4497% ( 61) 00:08:25.928 20870.695 - 20971.520: 90.3623% ( 66) 00:08:25.928 20971.520 - 21072.345: 91.0537% ( 50) 00:08:25.928 21072.345 - 21173.169: 
91.6621% ( 44) 00:08:25.928 21173.169 - 21273.994: 92.3534% ( 50) 00:08:25.928 21273.994 - 21374.818: 92.9480% ( 43) 00:08:25.928 21374.818 - 21475.643: 93.6532% ( 51) 00:08:25.928 21475.643 - 21576.468: 94.2754% ( 45) 00:08:25.928 21576.468 - 21677.292: 94.8700% ( 43) 00:08:25.928 21677.292 - 21778.117: 95.3678% ( 36) 00:08:25.928 21778.117 - 21878.942: 95.6305% ( 19) 00:08:25.928 21878.942 - 21979.766: 95.8794% ( 18) 00:08:25.928 21979.766 - 22080.591: 96.1421% ( 19) 00:08:25.928 22080.591 - 22181.415: 96.3081% ( 12) 00:08:25.928 22181.415 - 22282.240: 96.4602% ( 11) 00:08:25.928 22282.240 - 22383.065: 96.5570% ( 7) 00:08:25.928 22383.065 - 22483.889: 96.6814% ( 9) 00:08:25.928 22483.889 - 22584.714: 96.7782% ( 7) 00:08:25.928 22584.714 - 22685.538: 96.9027% ( 9) 00:08:25.928 22685.538 - 22786.363: 97.0133% ( 8) 00:08:25.928 22786.363 - 22887.188: 97.0962% ( 6) 00:08:25.928 22887.188 - 22988.012: 97.2069% ( 8) 00:08:25.928 22988.012 - 23088.837: 97.3037% ( 7) 00:08:25.928 23088.837 - 23189.662: 97.4143% ( 8) 00:08:25.928 23189.662 - 23290.486: 97.4696% ( 4) 00:08:25.928 23290.486 - 23391.311: 97.5249% ( 4) 00:08:25.928 23391.311 - 23492.135: 97.5664% ( 3) 00:08:25.928 23492.135 - 23592.960: 97.6217% ( 4) 00:08:25.928 23592.960 - 23693.785: 97.6770% ( 4) 00:08:25.928 23693.785 - 23794.609: 97.7185% ( 3) 00:08:25.928 23794.609 - 23895.434: 97.7600% ( 3) 00:08:25.928 23895.434 - 23996.258: 97.8153% ( 4) 00:08:25.928 23996.258 - 24097.083: 97.8706% ( 4) 00:08:25.928 24097.083 - 24197.908: 97.9121% ( 3) 00:08:25.928 24197.908 - 24298.732: 97.9535% ( 3) 00:08:25.928 24298.732 - 24399.557: 98.0088% ( 4) 00:08:25.928 24399.557 - 24500.382: 98.0503% ( 3) 00:08:25.928 24500.382 - 24601.206: 98.1056% ( 4) 00:08:25.929 24601.206 - 24702.031: 98.1610% ( 4) 00:08:25.929 24702.031 - 24802.855: 98.2024% ( 3) 00:08:25.929 24802.855 - 24903.680: 98.2301% ( 2) 00:08:25.929 29239.138 - 29440.788: 98.2716% ( 3) 00:08:25.929 29440.788 - 29642.437: 98.3407% ( 5) 00:08:25.929 29642.437 - 29844.086: 98.4098% ( 5) 00:08:25.929 29844.086 - 30045.735: 98.4928% ( 6) 00:08:25.929 30045.735 - 30247.385: 98.5619% ( 5) 00:08:25.929 30247.385 - 30449.034: 98.6449% ( 6) 00:08:25.929 30449.034 - 30650.683: 98.7417% ( 7) 00:08:25.929 30650.683 - 30852.332: 98.8385% ( 7) 00:08:25.929 30852.332 - 31053.982: 98.9076% ( 5) 00:08:25.929 31053.982 - 31255.631: 99.0044% ( 7) 00:08:25.929 31255.631 - 31457.280: 99.0874% ( 6) 00:08:25.929 31457.280 - 31658.929: 99.1150% ( 2) 00:08:25.929 39724.898 - 39926.548: 99.2118% ( 7) 00:08:25.929 39926.548 - 40128.197: 99.3639% ( 11) 00:08:25.929 40128.197 - 40329.846: 99.5160% ( 11) 00:08:25.929 40329.846 - 40531.495: 99.6681% ( 11) 00:08:25.929 40531.495 - 40733.145: 99.8202% ( 11) 00:08:25.929 40733.145 - 40934.794: 99.9585% ( 10) 00:08:25.929 40934.794 - 41136.443: 100.0000% ( 3) 00:08:25.929 00:08:25.929 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:25.929 ============================================================================== 00:08:25.929 Range in us Cumulative IO count 00:08:25.929 5646.178 - 5671.385: 0.0553% ( 4) 00:08:25.929 5671.385 - 5696.591: 0.1106% ( 4) 00:08:25.929 5696.591 - 5721.797: 0.1244% ( 1) 00:08:25.929 5721.797 - 5747.003: 0.1383% ( 1) 00:08:25.929 5747.003 - 5772.209: 0.1659% ( 2) 00:08:25.929 5772.209 - 5797.415: 0.1936% ( 2) 00:08:25.929 5797.415 - 5822.622: 0.2074% ( 1) 00:08:25.929 5822.622 - 5847.828: 0.2489% ( 3) 00:08:25.929 5847.828 - 5873.034: 0.2765% ( 2) 00:08:25.929 5873.034 - 5898.240: 0.3042% ( 2) 00:08:25.929 5898.240 - 
5923.446: 0.3319% ( 2) 00:08:25.929 5923.446 - 5948.652: 0.3595% ( 2) 00:08:25.929 5948.652 - 5973.858: 0.3872% ( 2) 00:08:25.929 5973.858 - 5999.065: 0.4148% ( 2) 00:08:25.929 5999.065 - 6024.271: 0.4425% ( 2) 00:08:25.929 6024.271 - 6049.477: 0.4701% ( 2) 00:08:25.929 6049.477 - 6074.683: 0.4978% ( 2) 00:08:25.929 6074.683 - 6099.889: 0.5254% ( 2) 00:08:25.929 6099.889 - 6125.095: 0.5531% ( 2) 00:08:25.929 6125.095 - 6150.302: 0.5808% ( 2) 00:08:25.929 6150.302 - 6175.508: 0.6084% ( 2) 00:08:25.929 6175.508 - 6200.714: 0.6361% ( 2) 00:08:25.929 6200.714 - 6225.920: 0.6499% ( 1) 00:08:25.929 6225.920 - 6251.126: 0.6775% ( 2) 00:08:25.929 6251.126 - 6276.332: 0.7052% ( 2) 00:08:25.929 6276.332 - 6301.538: 0.7329% ( 2) 00:08:25.929 6301.538 - 6326.745: 0.7605% ( 2) 00:08:25.929 6326.745 - 6351.951: 0.7743% ( 1) 00:08:25.929 6351.951 - 6377.157: 0.8020% ( 2) 00:08:25.929 6377.157 - 6402.363: 0.8158% ( 1) 00:08:25.929 6402.363 - 6427.569: 0.8435% ( 2) 00:08:25.929 6427.569 - 6452.775: 0.8573% ( 1) 00:08:25.929 6452.775 - 6503.188: 0.8850% ( 2) 00:08:25.929 11090.708 - 11141.120: 0.9126% ( 2) 00:08:25.929 11141.120 - 11191.532: 0.9679% ( 4) 00:08:25.929 11191.532 - 11241.945: 1.0094% ( 3) 00:08:25.929 11241.945 - 11292.357: 1.0647% ( 4) 00:08:25.929 11292.357 - 11342.769: 1.1200% ( 4) 00:08:25.929 11342.769 - 11393.182: 1.1753% ( 4) 00:08:25.929 11393.182 - 11443.594: 1.2306% ( 4) 00:08:25.929 11443.594 - 11494.006: 1.2998% ( 5) 00:08:25.929 11494.006 - 11544.418: 1.3413% ( 3) 00:08:25.929 11544.418 - 11594.831: 1.3966% ( 4) 00:08:25.929 11594.831 - 11645.243: 1.4519% ( 4) 00:08:25.929 11645.243 - 11695.655: 1.5072% ( 4) 00:08:25.929 11695.655 - 11746.068: 1.5625% ( 4) 00:08:25.929 11746.068 - 11796.480: 1.6178% ( 4) 00:08:25.929 11796.480 - 11846.892: 1.6593% ( 3) 00:08:25.929 11846.892 - 11897.305: 1.7008% ( 3) 00:08:25.929 11897.305 - 11947.717: 1.7561% ( 4) 00:08:25.929 11947.717 - 11998.129: 1.7699% ( 1) 00:08:25.929 13510.498 - 13611.323: 1.7837% ( 1) 00:08:25.929 13611.323 - 13712.148: 1.9082% ( 9) 00:08:25.929 13712.148 - 13812.972: 2.0603% ( 11) 00:08:25.929 13812.972 - 13913.797: 2.1986% ( 10) 00:08:25.929 13913.797 - 14014.622: 2.4336% ( 17) 00:08:25.929 14014.622 - 14115.446: 2.6687% ( 17) 00:08:25.929 14115.446 - 14216.271: 3.0144% ( 25) 00:08:25.929 14216.271 - 14317.095: 3.4983% ( 35) 00:08:25.929 14317.095 - 14417.920: 4.1482% ( 47) 00:08:25.929 14417.920 - 14518.745: 5.0470% ( 65) 00:08:25.929 14518.745 - 14619.569: 5.9181% ( 63) 00:08:25.929 14619.569 - 14720.394: 7.2041% ( 93) 00:08:25.929 14720.394 - 14821.218: 8.3241% ( 81) 00:08:25.929 14821.218 - 14922.043: 9.4580% ( 82) 00:08:25.929 14922.043 - 15022.868: 10.7301% ( 92) 00:08:25.929 15022.868 - 15123.692: 12.0713% ( 97) 00:08:25.929 15123.692 - 15224.517: 13.5647% ( 108) 00:08:25.929 15224.517 - 15325.342: 15.2517% ( 122) 00:08:25.929 15325.342 - 15426.166: 17.2013% ( 141) 00:08:25.929 15426.166 - 15526.991: 18.9851% ( 129) 00:08:25.929 15526.991 - 15627.815: 20.9347% ( 141) 00:08:25.929 15627.815 - 15728.640: 22.6079% ( 121) 00:08:25.929 15728.640 - 15829.465: 24.1842% ( 114) 00:08:25.929 15829.465 - 15930.289: 25.7605% ( 114) 00:08:25.929 15930.289 - 16031.114: 27.4475% ( 122) 00:08:25.929 16031.114 - 16131.938: 29.0791% ( 118) 00:08:25.929 16131.938 - 16232.763: 30.7384% ( 120) 00:08:25.929 16232.763 - 16333.588: 32.2179% ( 107) 00:08:25.929 16333.588 - 16434.412: 34.1538% ( 140) 00:08:25.929 16434.412 - 16535.237: 35.7716% ( 117) 00:08:25.929 16535.237 - 16636.062: 37.2373% ( 106) 00:08:25.929 16636.062 - 16736.886: 
38.6477% ( 102) 00:08:25.929 16736.886 - 16837.711: 40.2517% ( 116) 00:08:25.929 16837.711 - 16938.535: 41.9524% ( 123) 00:08:25.929 16938.535 - 17039.360: 43.9436% ( 144) 00:08:25.929 17039.360 - 17140.185: 45.9624% ( 146) 00:08:25.929 17140.185 - 17241.009: 48.0088% ( 148) 00:08:25.929 17241.009 - 17341.834: 50.0000% ( 144) 00:08:25.929 17341.834 - 17442.658: 51.9773% ( 143) 00:08:25.929 17442.658 - 17543.483: 53.9132% ( 140) 00:08:25.929 17543.483 - 17644.308: 55.7660% ( 134) 00:08:25.929 17644.308 - 17745.132: 57.8263% ( 149) 00:08:25.929 17745.132 - 17845.957: 60.0802% ( 163) 00:08:25.929 17845.957 - 17946.782: 62.1267% ( 148) 00:08:25.929 17946.782 - 18047.606: 64.0625% ( 140) 00:08:25.929 18047.606 - 18148.431: 65.7494% ( 122) 00:08:25.929 18148.431 - 18249.255: 67.2428% ( 108) 00:08:25.929 18249.255 - 18350.080: 68.7085% ( 106) 00:08:25.929 18350.080 - 18450.905: 70.1189% ( 102) 00:08:25.929 18450.905 - 18551.729: 71.3634% ( 90) 00:08:25.929 18551.729 - 18652.554: 72.4972% ( 82) 00:08:25.929 18652.554 - 18753.378: 73.6587% ( 84) 00:08:25.929 18753.378 - 18854.203: 74.4746% ( 59) 00:08:25.929 18854.203 - 18955.028: 75.2351% ( 55) 00:08:25.929 18955.028 - 19055.852: 75.8850% ( 47) 00:08:25.929 19055.852 - 19156.677: 76.5763% ( 50) 00:08:25.929 19156.677 - 19257.502: 77.3368% ( 55) 00:08:25.929 19257.502 - 19358.326: 78.0420% ( 51) 00:08:25.929 19358.326 - 19459.151: 78.7058% ( 48) 00:08:25.929 19459.151 - 19559.975: 79.3418% ( 46) 00:08:25.929 19559.975 - 19660.800: 79.9502% ( 44) 00:08:25.929 19660.800 - 19761.625: 80.7660% ( 59) 00:08:25.929 19761.625 - 19862.449: 81.4712% ( 51) 00:08:25.930 19862.449 - 19963.274: 82.2317% ( 55) 00:08:25.930 19963.274 - 20064.098: 82.9923% ( 55) 00:08:25.930 20064.098 - 20164.923: 83.7528% ( 55) 00:08:25.930 20164.923 - 20265.748: 84.4441% ( 50) 00:08:25.930 20265.748 - 20366.572: 85.0802% ( 46) 00:08:25.930 20366.572 - 20467.397: 85.8822% ( 58) 00:08:25.930 20467.397 - 20568.222: 86.5321% ( 47) 00:08:25.930 20568.222 - 20669.046: 87.3479% ( 59) 00:08:25.930 20669.046 - 20769.871: 88.2467% ( 65) 00:08:25.930 20769.871 - 20870.695: 88.9934% ( 54) 00:08:25.930 20870.695 - 20971.520: 89.8921% ( 65) 00:08:25.930 20971.520 - 21072.345: 90.8186% ( 67) 00:08:25.930 21072.345 - 21173.169: 91.7727% ( 69) 00:08:25.930 21173.169 - 21273.994: 92.5332% ( 55) 00:08:25.930 21273.994 - 21374.818: 93.2799% ( 54) 00:08:25.930 21374.818 - 21475.643: 93.8744% ( 43) 00:08:25.930 21475.643 - 21576.468: 94.3584% ( 35) 00:08:25.930 21576.468 - 21677.292: 94.7732% ( 30) 00:08:25.930 21677.292 - 21778.117: 95.1051% ( 24) 00:08:25.930 21778.117 - 21878.942: 95.4923% ( 28) 00:08:25.930 21878.942 - 21979.766: 95.8794% ( 28) 00:08:25.930 21979.766 - 22080.591: 96.1698% ( 21) 00:08:25.930 22080.591 - 22181.415: 96.4187% ( 18) 00:08:25.930 22181.415 - 22282.240: 96.6261% ( 15) 00:08:25.930 22282.240 - 22383.065: 96.7920% ( 12) 00:08:25.930 22383.065 - 22483.889: 96.9994% ( 15) 00:08:25.930 22483.889 - 22584.714: 97.1101% ( 8) 00:08:25.930 22584.714 - 22685.538: 97.2622% ( 11) 00:08:25.930 22685.538 - 22786.363: 97.3728% ( 8) 00:08:25.930 22786.363 - 22887.188: 97.5249% ( 11) 00:08:25.930 22887.188 - 22988.012: 97.6355% ( 8) 00:08:25.930 22988.012 - 23088.837: 97.7046% ( 5) 00:08:25.930 23088.837 - 23189.662: 97.7600% ( 4) 00:08:25.930 23189.662 - 23290.486: 97.8014% ( 3) 00:08:25.930 23290.486 - 23391.311: 97.8429% ( 3) 00:08:25.930 23391.311 - 23492.135: 97.8982% ( 4) 00:08:25.930 23492.135 - 23592.960: 97.9535% ( 4) 00:08:25.930 23592.960 - 23693.785: 97.9950% ( 3) 00:08:25.930 
23693.785 - 23794.609: 98.0365% ( 3) 00:08:25.930 23794.609 - 23895.434: 98.0918% ( 4) 00:08:25.930 23895.434 - 23996.258: 98.1471% ( 4) 00:08:25.930 23996.258 - 24097.083: 98.1886% ( 3) 00:08:25.930 24097.083 - 24197.908: 98.2301% ( 3) 00:08:25.930 29239.138 - 29440.788: 98.2854% ( 4) 00:08:25.930 29440.788 - 29642.437: 98.3545% ( 5) 00:08:25.930 29642.437 - 29844.086: 98.4375% ( 6) 00:08:25.930 29844.086 - 30045.735: 98.5205% ( 6) 00:08:25.930 30045.735 - 30247.385: 98.6034% ( 6) 00:08:25.930 30247.385 - 30449.034: 98.6864% ( 6) 00:08:25.930 30449.034 - 30650.683: 98.7970% ( 8) 00:08:25.930 30650.683 - 30852.332: 98.8938% ( 7) 00:08:25.930 30852.332 - 31053.982: 99.0044% ( 8) 00:08:25.930 31053.982 - 31255.631: 99.1012% ( 7) 00:08:25.930 31255.631 - 31457.280: 99.1150% ( 1) 00:08:25.930 39119.951 - 39321.600: 99.1980% ( 6) 00:08:25.930 39321.600 - 39523.249: 99.3225% ( 9) 00:08:25.930 39523.249 - 39724.898: 99.4746% ( 11) 00:08:25.930 39724.898 - 39926.548: 99.5852% ( 8) 00:08:25.930 39926.548 - 40128.197: 99.7373% ( 11) 00:08:25.930 40128.197 - 40329.846: 99.8894% ( 11) 00:08:25.930 40329.846 - 40531.495: 100.0000% ( 8) 00:08:25.930 00:08:25.930 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:25.930 ============================================================================== 00:08:25.930 Range in us Cumulative IO count 00:08:25.930 4940.406 - 4965.612: 0.0138% ( 1) 00:08:25.930 4965.612 - 4990.818: 0.0277% ( 1) 00:08:25.930 4990.818 - 5016.025: 0.0968% ( 5) 00:08:25.930 5041.231 - 5066.437: 0.1244% ( 2) 00:08:25.930 5066.437 - 5091.643: 0.1383% ( 1) 00:08:25.930 5091.643 - 5116.849: 0.1798% ( 3) 00:08:25.930 5116.849 - 5142.055: 0.2212% ( 3) 00:08:25.930 5167.262 - 5192.468: 0.2489% ( 2) 00:08:25.930 5192.468 - 5217.674: 0.2765% ( 2) 00:08:25.930 5217.674 - 5242.880: 0.3042% ( 2) 00:08:25.930 5242.880 - 5268.086: 0.3319% ( 2) 00:08:25.930 5268.086 - 5293.292: 0.3595% ( 2) 00:08:25.930 5293.292 - 5318.498: 0.3872% ( 2) 00:08:25.930 5318.498 - 5343.705: 0.4148% ( 2) 00:08:25.930 5343.705 - 5368.911: 0.4425% ( 2) 00:08:25.930 5368.911 - 5394.117: 0.4701% ( 2) 00:08:25.930 5394.117 - 5419.323: 0.4978% ( 2) 00:08:25.930 5419.323 - 5444.529: 0.5116% ( 1) 00:08:25.930 5444.529 - 5469.735: 0.5393% ( 2) 00:08:25.930 5469.735 - 5494.942: 0.5669% ( 2) 00:08:25.930 5494.942 - 5520.148: 0.5946% ( 2) 00:08:25.930 5520.148 - 5545.354: 0.6222% ( 2) 00:08:25.930 5545.354 - 5570.560: 0.6499% ( 2) 00:08:25.930 5570.560 - 5595.766: 0.6775% ( 2) 00:08:25.930 5595.766 - 5620.972: 0.7052% ( 2) 00:08:25.930 5620.972 - 5646.178: 0.7190% ( 1) 00:08:25.930 5646.178 - 5671.385: 0.7467% ( 2) 00:08:25.930 5671.385 - 5696.591: 0.7743% ( 2) 00:08:25.930 5696.591 - 5721.797: 0.8020% ( 2) 00:08:25.930 5721.797 - 5747.003: 0.8296% ( 2) 00:08:25.930 5747.003 - 5772.209: 0.8573% ( 2) 00:08:25.930 5772.209 - 5797.415: 0.8711% ( 1) 00:08:25.930 5797.415 - 5822.622: 0.8850% ( 1) 00:08:25.930 10183.286 - 10233.698: 0.9126% ( 2) 00:08:25.930 10233.698 - 10284.111: 0.9679% ( 4) 00:08:25.930 10284.111 - 10334.523: 1.0232% ( 4) 00:08:25.930 10334.523 - 10384.935: 1.0785% ( 4) 00:08:25.930 10384.935 - 10435.348: 1.1338% ( 4) 00:08:25.930 10435.348 - 10485.760: 1.1892% ( 4) 00:08:25.930 10485.760 - 10536.172: 1.2445% ( 4) 00:08:25.930 10536.172 - 10586.585: 1.2998% ( 4) 00:08:25.930 10586.585 - 10636.997: 1.3551% ( 4) 00:08:25.930 10636.997 - 10687.409: 1.4104% ( 4) 00:08:25.930 10687.409 - 10737.822: 1.4657% ( 4) 00:08:25.930 10737.822 - 10788.234: 1.5210% ( 4) 00:08:25.930 10788.234 - 10838.646: 1.5763% ( 4) 
00:08:25.930 10838.646 - 10889.058: 1.6316% ( 4) 00:08:25.930 10889.058 - 10939.471: 1.6869% ( 4) 00:08:25.930 10939.471 - 10989.883: 1.7146% ( 2) 00:08:25.930 10989.883 - 11040.295: 1.7561% ( 3) 00:08:25.930 11040.295 - 11090.708: 1.7699% ( 1) 00:08:25.930 13712.148 - 13812.972: 1.7837% ( 1) 00:08:25.930 13812.972 - 13913.797: 2.0326% ( 18) 00:08:25.930 13913.797 - 14014.622: 2.3645% ( 24) 00:08:25.930 14014.622 - 14115.446: 2.8070% ( 32) 00:08:25.930 14115.446 - 14216.271: 3.4845% ( 49) 00:08:25.930 14216.271 - 14317.095: 4.0376% ( 40) 00:08:25.930 14317.095 - 14417.920: 4.6875% ( 47) 00:08:25.930 14417.920 - 14518.745: 5.3927% ( 51) 00:08:25.930 14518.745 - 14619.569: 6.2085% ( 59) 00:08:25.930 14619.569 - 14720.394: 7.2594% ( 76) 00:08:25.930 14720.394 - 14821.218: 8.1582% ( 65) 00:08:25.930 14821.218 - 14922.043: 9.3473% ( 86) 00:08:25.930 14922.043 - 15022.868: 10.7716% ( 103) 00:08:25.930 15022.868 - 15123.692: 12.4032% ( 118) 00:08:25.930 15123.692 - 15224.517: 13.8274% ( 103) 00:08:25.930 15224.517 - 15325.342: 15.4591% ( 118) 00:08:25.930 15325.342 - 15426.166: 17.1875% ( 125) 00:08:25.931 15426.166 - 15526.991: 18.9851% ( 130) 00:08:25.931 15526.991 - 15627.815: 20.6858% ( 123) 00:08:25.931 15627.815 - 15728.640: 22.4281% ( 126) 00:08:25.931 15728.640 - 15829.465: 24.3086% ( 136) 00:08:25.931 15829.465 - 15930.289: 25.9264% ( 117) 00:08:25.931 15930.289 - 16031.114: 27.7102% ( 129) 00:08:25.931 16031.114 - 16131.938: 29.5354% ( 132) 00:08:25.931 16131.938 - 16232.763: 31.3468% ( 131) 00:08:25.931 16232.763 - 16333.588: 33.0199% ( 121) 00:08:25.931 16333.588 - 16434.412: 34.6654% ( 119) 00:08:25.931 16434.412 - 16535.237: 36.1726% ( 109) 00:08:25.931 16535.237 - 16636.062: 37.6106% ( 104) 00:08:25.931 16636.062 - 16736.886: 39.1316% ( 110) 00:08:25.931 16736.886 - 16837.711: 40.6665% ( 111) 00:08:25.931 16837.711 - 16938.535: 42.1184% ( 105) 00:08:25.931 16938.535 - 17039.360: 43.7223% ( 116) 00:08:25.931 17039.360 - 17140.185: 45.2434% ( 110) 00:08:25.931 17140.185 - 17241.009: 46.8888% ( 119) 00:08:25.931 17241.009 - 17341.834: 48.6173% ( 125) 00:08:25.931 17341.834 - 17442.658: 50.5946% ( 143) 00:08:25.931 17442.658 - 17543.483: 52.6687% ( 150) 00:08:25.931 17543.483 - 17644.308: 54.6737% ( 145) 00:08:25.931 17644.308 - 17745.132: 56.6233% ( 141) 00:08:25.931 17745.132 - 17845.957: 58.8772% ( 163) 00:08:25.931 17845.957 - 17946.782: 60.7992% ( 139) 00:08:25.931 17946.782 - 18047.606: 62.6106% ( 131) 00:08:25.931 18047.606 - 18148.431: 64.5050% ( 137) 00:08:25.931 18148.431 - 18249.255: 66.5653% ( 149) 00:08:25.931 18249.255 - 18350.080: 68.3352% ( 128) 00:08:25.931 18350.080 - 18450.905: 69.9668% ( 118) 00:08:25.931 18450.905 - 18551.729: 71.3357% ( 99) 00:08:25.931 18551.729 - 18652.554: 72.5525% ( 88) 00:08:25.931 18652.554 - 18753.378: 73.6449% ( 79) 00:08:25.931 18753.378 - 18854.203: 74.5852% ( 68) 00:08:25.931 18854.203 - 18955.028: 75.4287% ( 61) 00:08:25.931 18955.028 - 19055.852: 76.1200% ( 50) 00:08:25.931 19055.852 - 19156.677: 76.8944% ( 56) 00:08:25.931 19156.677 - 19257.502: 77.7931% ( 65) 00:08:25.931 19257.502 - 19358.326: 78.4845% ( 50) 00:08:25.931 19358.326 - 19459.151: 79.1482% ( 48) 00:08:25.931 19459.151 - 19559.975: 79.9087% ( 55) 00:08:25.931 19559.975 - 19660.800: 80.6416% ( 53) 00:08:25.931 19660.800 - 19761.625: 81.3744% ( 53) 00:08:25.931 19761.625 - 19862.449: 82.1764% ( 58) 00:08:25.931 19862.449 - 19963.274: 82.9646% ( 57) 00:08:25.931 19963.274 - 20064.098: 83.7113% ( 54) 00:08:25.931 20064.098 - 20164.923: 84.4303% ( 52) 00:08:25.931 20164.923 
- 20265.748: 85.1493% ( 52) 00:08:25.931 20265.748 - 20366.572: 85.7577% ( 44) 00:08:25.931 20366.572 - 20467.397: 86.3523% ( 43) 00:08:25.931 20467.397 - 20568.222: 86.8778% ( 38) 00:08:25.931 20568.222 - 20669.046: 87.5691% ( 50) 00:08:25.931 20669.046 - 20769.871: 88.2329% ( 48) 00:08:25.931 20769.871 - 20870.695: 89.0348% ( 58) 00:08:25.931 20870.695 - 20971.520: 89.8507% ( 59) 00:08:25.931 20971.520 - 21072.345: 90.7494% ( 65) 00:08:25.931 21072.345 - 21173.169: 91.5653% ( 59) 00:08:25.931 21173.169 - 21273.994: 92.3119% ( 54) 00:08:25.931 21273.994 - 21374.818: 93.0171% ( 51) 00:08:25.931 21374.818 - 21475.643: 93.6532% ( 46) 00:08:25.931 21475.643 - 21576.468: 94.3308% ( 49) 00:08:25.931 21576.468 - 21677.292: 94.9253% ( 43) 00:08:25.931 21677.292 - 21778.117: 95.3678% ( 32) 00:08:25.931 21778.117 - 21878.942: 95.8241% ( 33) 00:08:25.931 21878.942 - 21979.766: 96.2804% ( 33) 00:08:25.931 21979.766 - 22080.591: 96.5846% ( 22) 00:08:25.931 22080.591 - 22181.415: 96.8335% ( 18) 00:08:25.931 22181.415 - 22282.240: 97.0409% ( 15) 00:08:25.931 22282.240 - 22383.065: 97.1792% ( 10) 00:08:25.931 22383.065 - 22483.889: 97.2760% ( 7) 00:08:25.931 22483.889 - 22584.714: 97.3866% ( 8) 00:08:25.931 22584.714 - 22685.538: 97.4972% ( 8) 00:08:25.931 22685.538 - 22786.363: 97.5940% ( 7) 00:08:25.931 22786.363 - 22887.188: 97.6908% ( 7) 00:08:25.931 22887.188 - 22988.012: 97.8014% ( 8) 00:08:25.931 22988.012 - 23088.837: 97.9259% ( 9) 00:08:25.931 23088.837 - 23189.662: 98.0227% ( 7) 00:08:25.931 23189.662 - 23290.486: 98.1333% ( 8) 00:08:25.931 23290.486 - 23391.311: 98.1886% ( 4) 00:08:25.931 23391.311 - 23492.135: 98.2301% ( 3) 00:08:25.931 29642.437 - 29844.086: 98.2577% ( 2) 00:08:25.931 29844.086 - 30045.735: 98.3407% ( 6) 00:08:25.931 30045.735 - 30247.385: 98.4237% ( 6) 00:08:25.931 30247.385 - 30449.034: 98.5066% ( 6) 00:08:25.931 30449.034 - 30650.683: 98.6587% ( 11) 00:08:25.931 30650.683 - 30852.332: 98.7832% ( 9) 00:08:25.931 30852.332 - 31053.982: 98.9076% ( 9) 00:08:25.931 31053.982 - 31255.631: 98.9491% ( 3) 00:08:25.931 31255.631 - 31457.280: 98.9629% ( 1) 00:08:25.931 31457.280 - 31658.929: 99.0459% ( 6) 00:08:25.931 31658.929 - 31860.578: 99.1150% ( 5) 00:08:25.931 38313.354 - 38515.003: 99.1289% ( 1) 00:08:25.931 38515.003 - 38716.652: 99.1704% ( 3) 00:08:25.931 38918.302 - 39119.951: 99.2118% ( 3) 00:08:25.931 39119.951 - 39321.600: 99.3778% ( 12) 00:08:25.931 39321.600 - 39523.249: 99.5160% ( 10) 00:08:25.931 39523.249 - 39724.898: 99.6405% ( 9) 00:08:25.931 39724.898 - 39926.548: 99.7788% ( 10) 00:08:25.931 39926.548 - 40128.197: 99.9170% ( 10) 00:08:25.931 40128.197 - 40329.846: 100.0000% ( 6) 00:08:25.931 00:08:25.931 13:21:21 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:27.318 Initializing NVMe Controllers 00:08:27.318 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:27.318 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:27.318 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:27.319 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:27.319 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:27.319 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:27.319 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:27.319 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:27.319 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:27.319 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:27.319 
Initialization complete. Launching workers. 00:08:27.319 ======================================================== 00:08:27.319 Latency(us) 00:08:27.319 Device Information : IOPS MiB/s Average min max 00:08:27.319 PCIE (0000:00:10.0) NSID 1 from core 0: 7679.85 90.00 16685.62 11828.23 37900.29 00:08:27.319 PCIE (0000:00:11.0) NSID 1 from core 0: 7679.85 90.00 16677.19 10809.83 38545.42 00:08:27.319 PCIE (0000:00:13.0) NSID 1 from core 0: 7679.85 90.00 16660.36 9047.07 39994.42 00:08:27.319 PCIE (0000:00:12.0) NSID 1 from core 0: 7679.85 90.00 16643.80 7947.25 39591.11 00:08:27.319 PCIE (0000:00:12.0) NSID 2 from core 0: 7679.85 90.00 16624.54 6829.05 39419.47 00:08:27.319 PCIE (0000:00:12.0) NSID 3 from core 0: 7743.32 90.74 16471.26 6044.18 29281.97 00:08:27.319 ======================================================== 00:08:27.319 Total : 46142.57 540.73 16626.91 6044.18 39994.42 00:08:27.319 00:08:27.319 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:27.319 ================================================================================= 00:08:27.319 1.00000% : 12603.077us 00:08:27.319 10.00000% : 13913.797us 00:08:27.319 25.00000% : 15022.868us 00:08:27.319 50.00000% : 16232.763us 00:08:27.319 75.00000% : 18047.606us 00:08:27.319 90.00000% : 19257.502us 00:08:27.319 95.00000% : 20669.046us 00:08:27.319 98.00000% : 23088.837us 00:08:27.319 99.00000% : 28230.892us 00:08:27.319 99.50000% : 37103.458us 00:08:27.319 99.90000% : 37708.406us 00:08:27.319 99.99000% : 37910.055us 00:08:27.319 99.99900% : 37910.055us 00:08:27.319 99.99990% : 37910.055us 00:08:27.319 99.99999% : 37910.055us 00:08:27.319 00:08:27.319 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:27.319 ================================================================================= 00:08:27.319 1.00000% : 12603.077us 00:08:27.319 10.00000% : 14014.622us 00:08:27.319 25.00000% : 15123.692us 00:08:27.319 50.00000% : 16232.763us 00:08:27.319 75.00000% : 18047.606us 00:08:27.319 90.00000% : 19055.852us 00:08:27.319 95.00000% : 21072.345us 00:08:27.319 98.00000% : 22584.714us 00:08:27.319 99.00000% : 28029.243us 00:08:27.319 99.50000% : 37506.757us 00:08:27.319 99.90000% : 38515.003us 00:08:27.319 99.99000% : 38716.652us 00:08:27.319 99.99900% : 38716.652us 00:08:27.319 99.99990% : 38716.652us 00:08:27.319 99.99999% : 38716.652us 00:08:27.319 00:08:27.319 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:27.319 ================================================================================= 00:08:27.319 1.00000% : 12502.252us 00:08:27.319 10.00000% : 13913.797us 00:08:27.319 25.00000% : 15123.692us 00:08:27.319 50.00000% : 16232.763us 00:08:27.319 75.00000% : 17845.957us 00:08:27.319 90.00000% : 19055.852us 00:08:27.319 95.00000% : 20971.520us 00:08:27.319 98.00000% : 22282.240us 00:08:27.319 99.00000% : 29440.788us 00:08:27.319 99.50000% : 38918.302us 00:08:27.319 99.90000% : 39926.548us 00:08:27.319 99.99000% : 40128.197us 00:08:27.319 99.99900% : 40128.197us 00:08:27.319 99.99990% : 40128.197us 00:08:27.319 99.99999% : 40128.197us 00:08:27.319 00:08:27.319 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:27.319 ================================================================================= 00:08:27.319 1.00000% : 12250.191us 00:08:27.319 10.00000% : 13913.797us 00:08:27.319 25.00000% : 15022.868us 00:08:27.319 50.00000% : 16333.588us 00:08:27.319 75.00000% : 17946.782us 00:08:27.319 90.00000% : 18955.028us 00:08:27.319 95.00000% : 
20870.695us 00:08:27.319 98.00000% : 22181.415us 00:08:27.319 99.00000% : 28634.191us 00:08:27.319 99.50000% : 38918.302us 00:08:27.319 99.90000% : 39523.249us 00:08:27.319 99.99000% : 39724.898us 00:08:27.319 99.99900% : 39724.898us 00:08:27.319 99.99990% : 39724.898us 00:08:27.319 99.99999% : 39724.898us 00:08:27.319 00:08:27.319 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:27.319 ================================================================================= 00:08:27.319 1.00000% : 12754.314us 00:08:27.319 10.00000% : 13812.972us 00:08:27.319 25.00000% : 15022.868us 00:08:27.319 50.00000% : 16333.588us 00:08:27.319 75.00000% : 18047.606us 00:08:27.319 90.00000% : 19055.852us 00:08:27.319 95.00000% : 20669.046us 00:08:27.319 98.00000% : 22383.065us 00:08:27.319 99.00000% : 28432.542us 00:08:27.319 99.50000% : 38716.652us 00:08:27.319 99.90000% : 39321.600us 00:08:27.319 99.99000% : 39523.249us 00:08:27.319 99.99900% : 39523.249us 00:08:27.319 99.99990% : 39523.249us 00:08:27.319 99.99999% : 39523.249us 00:08:27.319 00:08:27.319 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:27.319 ================================================================================= 00:08:27.319 1.00000% : 12552.665us 00:08:27.319 10.00000% : 13812.972us 00:08:27.319 25.00000% : 15022.868us 00:08:27.319 50.00000% : 16232.763us 00:08:27.319 75.00000% : 17946.782us 00:08:27.319 90.00000% : 19257.502us 00:08:27.319 95.00000% : 20467.397us 00:08:27.319 98.00000% : 21878.942us 00:08:27.319 99.00000% : 22584.714us 00:08:27.319 99.50000% : 28634.191us 00:08:27.319 99.90000% : 29239.138us 00:08:27.319 99.99000% : 29440.788us 00:08:27.319 99.99900% : 29440.788us 00:08:27.319 99.99990% : 29440.788us 00:08:27.319 99.99999% : 29440.788us 00:08:27.319 00:08:27.319 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:27.319 ============================================================================== 00:08:27.319 Range in us Cumulative IO count 00:08:27.319 11796.480 - 11846.892: 0.0129% ( 1) 00:08:27.319 11846.892 - 11897.305: 0.0517% ( 3) 00:08:27.319 11897.305 - 11947.717: 0.1420% ( 7) 00:08:27.319 11998.129 - 12048.542: 0.1679% ( 2) 00:08:27.319 12048.542 - 12098.954: 0.1808% ( 1) 00:08:27.319 12098.954 - 12149.366: 0.2583% ( 6) 00:08:27.319 12149.366 - 12199.778: 0.3357% ( 6) 00:08:27.319 12199.778 - 12250.191: 0.4261% ( 7) 00:08:27.319 12250.191 - 12300.603: 0.5424% ( 9) 00:08:27.319 12300.603 - 12351.015: 0.6457% ( 8) 00:08:27.319 12351.015 - 12401.428: 0.7231% ( 6) 00:08:27.319 12401.428 - 12451.840: 0.8006% ( 6) 00:08:27.319 12451.840 - 12502.252: 0.9039% ( 8) 00:08:27.319 12502.252 - 12552.665: 0.9556% ( 4) 00:08:27.319 12552.665 - 12603.077: 1.0331% ( 6) 00:08:27.319 12603.077 - 12653.489: 1.2784% ( 19) 00:08:27.319 12653.489 - 12703.902: 1.5496% ( 21) 00:08:27.319 12703.902 - 12754.314: 1.8208% ( 21) 00:08:27.319 12754.314 - 12804.726: 2.1049% ( 22) 00:08:27.319 12804.726 - 12855.138: 2.2856% ( 14) 00:08:27.319 12855.138 - 12905.551: 2.5826% ( 23) 00:08:27.319 12905.551 - 13006.375: 3.5640% ( 76) 00:08:27.319 13006.375 - 13107.200: 4.2097% ( 50) 00:08:27.319 13107.200 - 13208.025: 4.6875% ( 37) 00:08:27.319 13208.025 - 13308.849: 5.2686% ( 45) 00:08:27.319 13308.849 - 13409.674: 5.8884% ( 48) 00:08:27.319 13409.674 - 13510.498: 6.7794% ( 69) 00:08:27.319 13510.498 - 13611.323: 7.5155% ( 57) 00:08:27.319 13611.323 - 13712.148: 8.4969% ( 76) 00:08:27.319 13712.148 - 13812.972: 9.5558% ( 82) 00:08:27.319 13812.972 - 13913.797: 10.8858% ( 103) 
00:08:27.319 13913.797 - 14014.622: 11.7381% ( 66) 00:08:27.319 14014.622 - 14115.446: 12.6808% ( 73) 00:08:27.319 14115.446 - 14216.271: 13.4943% ( 63) 00:08:27.319 14216.271 - 14317.095: 14.5661% ( 83) 00:08:27.319 14317.095 - 14417.920: 15.5863% ( 79) 00:08:27.319 14417.920 - 14518.745: 16.9163% ( 103) 00:08:27.319 14518.745 - 14619.569: 18.3884% ( 114) 00:08:27.319 14619.569 - 14720.394: 20.0026% ( 125) 00:08:27.319 14720.394 - 14821.218: 21.5909% ( 123) 00:08:27.319 14821.218 - 14922.043: 23.0759% ( 115) 00:08:27.319 14922.043 - 15022.868: 25.0517% ( 153) 00:08:27.319 15022.868 - 15123.692: 27.4664% ( 187) 00:08:27.319 15123.692 - 15224.517: 29.4163% ( 151) 00:08:27.319 15224.517 - 15325.342: 31.7665% ( 182) 00:08:27.319 15325.342 - 15426.166: 33.8972% ( 165) 00:08:27.319 15426.166 - 15526.991: 35.7696% ( 145) 00:08:27.319 15526.991 - 15627.815: 37.7195% ( 151) 00:08:27.319 15627.815 - 15728.640: 40.4571% ( 212) 00:08:27.319 15728.640 - 15829.465: 42.5491% ( 162) 00:08:27.319 15829.465 - 15930.289: 44.8218% ( 176) 00:08:27.319 15930.289 - 16031.114: 46.9783% ( 167) 00:08:27.319 16031.114 - 16131.938: 49.0573% ( 161) 00:08:27.319 16131.938 - 16232.763: 51.0072% ( 151) 00:08:27.319 16232.763 - 16333.588: 52.6085% ( 124) 00:08:27.319 16333.588 - 16434.412: 54.4034% ( 139) 00:08:27.319 16434.412 - 16535.237: 55.6560% ( 97) 00:08:27.319 16535.237 - 16636.062: 57.1281% ( 114) 00:08:27.319 16636.062 - 16736.886: 58.8197% ( 131) 00:08:27.319 16736.886 - 16837.711: 60.4468% ( 126) 00:08:27.319 16837.711 - 16938.535: 61.8673% ( 110) 00:08:27.319 16938.535 - 17039.360: 63.4298% ( 121) 00:08:27.319 17039.360 - 17140.185: 64.7340% ( 101) 00:08:27.319 17140.185 - 17241.009: 66.2707% ( 119) 00:08:27.319 17241.009 - 17341.834: 67.5232% ( 97) 00:08:27.319 17341.834 - 17442.658: 68.5821% ( 82) 00:08:27.319 17442.658 - 17543.483: 69.6668% ( 84) 00:08:27.319 17543.483 - 17644.308: 70.9840% ( 102) 00:08:27.320 17644.308 - 17745.132: 72.1978% ( 94) 00:08:27.320 17745.132 - 17845.957: 73.2955% ( 85) 00:08:27.320 17845.957 - 17946.782: 74.9613% ( 129) 00:08:27.320 17946.782 - 18047.606: 76.3559% ( 108) 00:08:27.320 18047.606 - 18148.431: 77.9830% ( 126) 00:08:27.320 18148.431 - 18249.255: 79.4551% ( 114) 00:08:27.320 18249.255 - 18350.080: 80.6043% ( 89) 00:08:27.320 18350.080 - 18450.905: 81.8311% ( 95) 00:08:27.320 18450.905 - 18551.729: 83.0708% ( 96) 00:08:27.320 18551.729 - 18652.554: 84.6591% ( 123) 00:08:27.320 18652.554 - 18753.378: 85.8471% ( 92) 00:08:27.320 18753.378 - 18854.203: 87.0610% ( 94) 00:08:27.320 18854.203 - 18955.028: 87.9390% ( 68) 00:08:27.320 18955.028 - 19055.852: 88.9334% ( 77) 00:08:27.320 19055.852 - 19156.677: 89.6952% ( 59) 00:08:27.320 19156.677 - 19257.502: 90.3667% ( 52) 00:08:27.320 19257.502 - 19358.326: 91.0382% ( 52) 00:08:27.320 19358.326 - 19459.151: 91.5289% ( 38) 00:08:27.320 19459.151 - 19559.975: 91.9034% ( 29) 00:08:27.320 19559.975 - 19660.800: 92.2650% ( 28) 00:08:27.320 19660.800 - 19761.625: 92.5749% ( 24) 00:08:27.320 19761.625 - 19862.449: 92.9494% ( 29) 00:08:27.320 19862.449 - 19963.274: 93.4143% ( 36) 00:08:27.320 19963.274 - 20064.098: 93.7113% ( 23) 00:08:27.320 20064.098 - 20164.923: 93.9050% ( 15) 00:08:27.320 20164.923 - 20265.748: 94.1116% ( 16) 00:08:27.320 20265.748 - 20366.572: 94.4086% ( 23) 00:08:27.320 20366.572 - 20467.397: 94.6023% ( 15) 00:08:27.320 20467.397 - 20568.222: 94.8993% ( 23) 00:08:27.320 20568.222 - 20669.046: 95.1059% ( 16) 00:08:27.320 20669.046 - 20769.871: 95.2092% ( 8) 00:08:27.320 20769.871 - 20870.695: 95.3512% ( 11) 
00:08:27.320 20870.695 - 20971.520: 95.5966% ( 19) 00:08:27.320 20971.520 - 21072.345: 95.6482% ( 4) 00:08:27.320 21072.345 - 21173.169: 95.6870% ( 3) 00:08:27.320 21173.169 - 21273.994: 95.8807% ( 15) 00:08:27.320 21273.994 - 21374.818: 96.0098% ( 10) 00:08:27.320 21374.818 - 21475.643: 96.0744% ( 5) 00:08:27.320 21475.643 - 21576.468: 96.1260% ( 4) 00:08:27.320 21576.468 - 21677.292: 96.2681% ( 11) 00:08:27.320 21677.292 - 21778.117: 96.4230% ( 12) 00:08:27.320 21778.117 - 21878.942: 96.5780% ( 12) 00:08:27.320 21878.942 - 21979.766: 96.7200% ( 11) 00:08:27.320 21979.766 - 22080.591: 96.8104% ( 7) 00:08:27.320 22080.591 - 22181.415: 97.1333% ( 25) 00:08:27.320 22181.415 - 22282.240: 97.3528% ( 17) 00:08:27.320 22282.240 - 22383.065: 97.4819% ( 10) 00:08:27.320 22383.065 - 22483.889: 97.5465% ( 5) 00:08:27.320 22483.889 - 22584.714: 97.6627% ( 9) 00:08:27.320 22584.714 - 22685.538: 97.7014% ( 3) 00:08:27.320 22685.538 - 22786.363: 97.7402% ( 3) 00:08:27.320 22786.363 - 22887.188: 97.8177% ( 6) 00:08:27.320 22887.188 - 22988.012: 97.9468% ( 10) 00:08:27.320 22988.012 - 23088.837: 98.0114% ( 5) 00:08:27.320 23088.837 - 23189.662: 98.0888% ( 6) 00:08:27.320 23189.662 - 23290.486: 98.1663% ( 6) 00:08:27.320 23290.486 - 23391.311: 98.1921% ( 2) 00:08:27.320 23391.311 - 23492.135: 98.2438% ( 4) 00:08:27.320 23492.135 - 23592.960: 98.2825% ( 3) 00:08:27.320 23592.960 - 23693.785: 98.3342% ( 4) 00:08:27.320 23693.785 - 23794.609: 98.3471% ( 1) 00:08:27.320 27222.646 - 27424.295: 98.4633% ( 9) 00:08:27.320 27424.295 - 27625.945: 98.7216% ( 20) 00:08:27.320 27625.945 - 27827.594: 98.7603% ( 3) 00:08:27.320 27827.594 - 28029.243: 98.8895% ( 10) 00:08:27.320 28029.243 - 28230.892: 99.0057% ( 9) 00:08:27.320 28230.892 - 28432.542: 99.1477% ( 11) 00:08:27.320 28432.542 - 28634.191: 99.1736% ( 2) 00:08:27.320 36296.862 - 36498.511: 99.1865% ( 1) 00:08:27.320 36498.511 - 36700.160: 99.3027% ( 9) 00:08:27.320 36700.160 - 36901.809: 99.4318% ( 10) 00:08:27.320 36901.809 - 37103.458: 99.5480% ( 9) 00:08:27.320 37103.458 - 37305.108: 99.6772% ( 10) 00:08:27.320 37305.108 - 37506.757: 99.7546% ( 6) 00:08:27.320 37506.757 - 37708.406: 99.9096% ( 12) 00:08:27.320 37708.406 - 37910.055: 100.0000% ( 7) 00:08:27.320 00:08:27.320 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:27.320 ============================================================================== 00:08:27.320 Range in us Cumulative IO count 00:08:27.320 10788.234 - 10838.646: 0.0258% ( 2) 00:08:27.320 10838.646 - 10889.058: 0.0646% ( 3) 00:08:27.320 10889.058 - 10939.471: 0.1291% ( 5) 00:08:27.320 10939.471 - 10989.883: 0.2066% ( 6) 00:08:27.320 10989.883 - 11040.295: 0.2970% ( 7) 00:08:27.320 11040.295 - 11090.708: 0.3874% ( 7) 00:08:27.320 11090.708 - 11141.120: 0.5036% ( 9) 00:08:27.320 11141.120 - 11191.532: 0.5553% ( 4) 00:08:27.320 11191.532 - 11241.945: 0.5940% ( 3) 00:08:27.320 11241.945 - 11292.357: 0.6327% ( 3) 00:08:27.320 11292.357 - 11342.769: 0.6586% ( 2) 00:08:27.320 11342.769 - 11393.182: 0.6973% ( 3) 00:08:27.320 11393.182 - 11443.594: 0.7231% ( 2) 00:08:27.320 11443.594 - 11494.006: 0.7619% ( 3) 00:08:27.320 11494.006 - 11544.418: 0.7877% ( 2) 00:08:27.320 11544.418 - 11594.831: 0.8264% ( 3) 00:08:27.320 12401.428 - 12451.840: 0.8394% ( 1) 00:08:27.320 12451.840 - 12502.252: 0.9039% ( 5) 00:08:27.320 12502.252 - 12552.665: 0.9685% ( 5) 00:08:27.320 12552.665 - 12603.077: 1.0589% ( 7) 00:08:27.320 12603.077 - 12653.489: 1.1235% ( 5) 00:08:27.320 12653.489 - 12703.902: 1.2655% ( 11) 00:08:27.320 12703.902 - 
12754.314: 1.3688% ( 8) 00:08:27.320 12754.314 - 12804.726: 1.5238% ( 12) 00:08:27.320 12804.726 - 12855.138: 1.7045% ( 14) 00:08:27.320 12855.138 - 12905.551: 1.8853% ( 14) 00:08:27.320 12905.551 - 13006.375: 2.2856% ( 31) 00:08:27.320 13006.375 - 13107.200: 2.9959% ( 55) 00:08:27.320 13107.200 - 13208.025: 3.8094% ( 63) 00:08:27.320 13208.025 - 13308.849: 4.7004% ( 69) 00:08:27.320 13308.849 - 13409.674: 5.5527% ( 66) 00:08:27.320 13409.674 - 13510.498: 6.6503% ( 85) 00:08:27.320 13510.498 - 13611.323: 7.4638% ( 63) 00:08:27.320 13611.323 - 13712.148: 8.1353% ( 52) 00:08:27.320 13712.148 - 13812.972: 8.7293% ( 46) 00:08:27.320 13812.972 - 13913.797: 9.4525% ( 56) 00:08:27.320 13913.797 - 14014.622: 10.6663% ( 94) 00:08:27.320 14014.622 - 14115.446: 12.2805% ( 125) 00:08:27.320 14115.446 - 14216.271: 14.5532% ( 176) 00:08:27.320 14216.271 - 14317.095: 16.1028% ( 120) 00:08:27.320 14317.095 - 14417.920: 17.2521% ( 89) 00:08:27.320 14417.920 - 14518.745: 18.4659% ( 94) 00:08:27.320 14518.745 - 14619.569: 19.5894% ( 87) 00:08:27.320 14619.569 - 14720.394: 20.7128% ( 87) 00:08:27.320 14720.394 - 14821.218: 21.7459% ( 80) 00:08:27.320 14821.218 - 14922.043: 22.8048% ( 82) 00:08:27.320 14922.043 - 15022.868: 24.0444% ( 96) 00:08:27.320 15022.868 - 15123.692: 25.8006% ( 136) 00:08:27.320 15123.692 - 15224.517: 27.3115% ( 117) 00:08:27.320 15224.517 - 15325.342: 29.2485% ( 150) 00:08:27.320 15325.342 - 15426.166: 31.4566% ( 171) 00:08:27.320 15426.166 - 15526.991: 33.6002% ( 166) 00:08:27.320 15526.991 - 15627.815: 35.9246% ( 180) 00:08:27.320 15627.815 - 15728.640: 38.5201% ( 201) 00:08:27.320 15728.640 - 15829.465: 41.0124% ( 193) 00:08:27.320 15829.465 - 15930.289: 43.8275% ( 218) 00:08:27.320 15930.289 - 16031.114: 46.4101% ( 200) 00:08:27.320 16031.114 - 16131.938: 48.6699% ( 175) 00:08:27.320 16131.938 - 16232.763: 50.9039% ( 173) 00:08:27.320 16232.763 - 16333.588: 53.0992% ( 170) 00:08:27.320 16333.588 - 16434.412: 55.1653% ( 160) 00:08:27.320 16434.412 - 16535.237: 57.1152% ( 151) 00:08:27.320 16535.237 - 16636.062: 59.1167% ( 155) 00:08:27.320 16636.062 - 16736.886: 60.4855% ( 106) 00:08:27.320 16736.886 - 16837.711: 61.5961% ( 86) 00:08:27.320 16837.711 - 16938.535: 62.7970% ( 93) 00:08:27.320 16938.535 - 17039.360: 63.7655% ( 75) 00:08:27.320 17039.360 - 17140.185: 64.6823% ( 71) 00:08:27.320 17140.185 - 17241.009: 65.5217% ( 65) 00:08:27.320 17241.009 - 17341.834: 66.7485% ( 95) 00:08:27.320 17341.834 - 17442.658: 67.7815% ( 80) 00:08:27.320 17442.658 - 17543.483: 69.0987% ( 102) 00:08:27.320 17543.483 - 17644.308: 70.2221% ( 87) 00:08:27.320 17644.308 - 17745.132: 71.3843% ( 90) 00:08:27.320 17745.132 - 17845.957: 72.6369% ( 97) 00:08:27.320 17845.957 - 17946.782: 74.3414% ( 132) 00:08:27.320 17946.782 - 18047.606: 76.1751% ( 142) 00:08:27.320 18047.606 - 18148.431: 78.0992% ( 149) 00:08:27.320 18148.431 - 18249.255: 80.2299% ( 165) 00:08:27.320 18249.255 - 18350.080: 82.4122% ( 169) 00:08:27.320 18350.080 - 18450.905: 84.2846% ( 145) 00:08:27.320 18450.905 - 18551.729: 85.6921% ( 109) 00:08:27.320 18551.729 - 18652.554: 87.0997% ( 109) 00:08:27.320 18652.554 - 18753.378: 87.8874% ( 61) 00:08:27.320 18753.378 - 18854.203: 88.5976% ( 55) 00:08:27.320 18854.203 - 18955.028: 89.2304% ( 49) 00:08:27.320 18955.028 - 19055.852: 90.0181% ( 61) 00:08:27.320 19055.852 - 19156.677: 90.5992% ( 45) 00:08:27.320 19156.677 - 19257.502: 91.1157% ( 40) 00:08:27.320 19257.502 - 19358.326: 91.8001% ( 53) 00:08:27.320 19358.326 - 19459.151: 92.2004% ( 31) 00:08:27.320 19459.151 - 19559.975: 92.5103% ( 
24) 00:08:27.320 19559.975 - 19660.800: 92.7815% ( 21) 00:08:27.320 19660.800 - 19761.625: 92.9365% ( 12) 00:08:27.320 19761.625 - 19862.449: 93.0656% ( 10) 00:08:27.320 19862.449 - 19963.274: 93.2335% ( 13) 00:08:27.320 19963.274 - 20064.098: 93.3884% ( 12) 00:08:27.320 20064.098 - 20164.923: 93.5434% ( 12) 00:08:27.320 20164.923 - 20265.748: 93.7242% ( 14) 00:08:27.320 20265.748 - 20366.572: 93.8791% ( 12) 00:08:27.320 20366.572 - 20467.397: 94.0341% ( 12) 00:08:27.320 20467.397 - 20568.222: 94.1761% ( 11) 00:08:27.320 20568.222 - 20669.046: 94.3311% ( 12) 00:08:27.320 20669.046 - 20769.871: 94.6668% ( 26) 00:08:27.320 20769.871 - 20870.695: 94.8476% ( 14) 00:08:27.321 20870.695 - 20971.520: 94.9897% ( 11) 00:08:27.321 20971.520 - 21072.345: 95.1188% ( 10) 00:08:27.321 21072.345 - 21173.169: 95.2221% ( 8) 00:08:27.321 21173.169 - 21273.994: 95.3771% ( 12) 00:08:27.321 21273.994 - 21374.818: 95.5966% ( 17) 00:08:27.321 21374.818 - 21475.643: 95.9969% ( 31) 00:08:27.321 21475.643 - 21576.468: 96.3326% ( 26) 00:08:27.321 21576.468 - 21677.292: 96.6555% ( 25) 00:08:27.321 21677.292 - 21778.117: 96.9267% ( 21) 00:08:27.321 21778.117 - 21878.942: 97.0170% ( 7) 00:08:27.321 21878.942 - 21979.766: 97.0945% ( 6) 00:08:27.321 21979.766 - 22080.591: 97.2753% ( 14) 00:08:27.321 22080.591 - 22181.415: 97.4044% ( 10) 00:08:27.321 22181.415 - 22282.240: 97.5594% ( 12) 00:08:27.321 22282.240 - 22383.065: 97.7402% ( 14) 00:08:27.321 22383.065 - 22483.889: 97.9597% ( 17) 00:08:27.321 22483.889 - 22584.714: 98.0888% ( 10) 00:08:27.321 22584.714 - 22685.538: 98.1663% ( 6) 00:08:27.321 22685.538 - 22786.363: 98.2309% ( 5) 00:08:27.321 22786.363 - 22887.188: 98.2825% ( 4) 00:08:27.321 22887.188 - 22988.012: 98.3342% ( 4) 00:08:27.321 22988.012 - 23088.837: 98.3471% ( 1) 00:08:27.321 27020.997 - 27222.646: 98.4504% ( 8) 00:08:27.321 27222.646 - 27424.295: 98.5925% ( 11) 00:08:27.321 27424.295 - 27625.945: 98.7216% ( 10) 00:08:27.321 27625.945 - 27827.594: 98.8765% ( 12) 00:08:27.321 27827.594 - 28029.243: 99.0186% ( 11) 00:08:27.321 28029.243 - 28230.892: 99.1606% ( 11) 00:08:27.321 28230.892 - 28432.542: 99.1736% ( 1) 00:08:27.321 36296.862 - 36498.511: 99.2123% ( 3) 00:08:27.321 36498.511 - 36700.160: 99.2639% ( 4) 00:08:27.321 36700.160 - 36901.809: 99.3414% ( 6) 00:08:27.321 36901.809 - 37103.458: 99.4189% ( 6) 00:08:27.321 37103.458 - 37305.108: 99.4964% ( 6) 00:08:27.321 37305.108 - 37506.757: 99.5739% ( 6) 00:08:27.321 37506.757 - 37708.406: 99.6513% ( 6) 00:08:27.321 37708.406 - 37910.055: 99.7417% ( 7) 00:08:27.321 37910.055 - 38111.705: 99.8192% ( 6) 00:08:27.321 38111.705 - 38313.354: 99.8967% ( 6) 00:08:27.321 38313.354 - 38515.003: 99.9871% ( 7) 00:08:27.321 38515.003 - 38716.652: 100.0000% ( 1) 00:08:27.321 00:08:27.321 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:27.321 ============================================================================== 00:08:27.321 Range in us Cumulative IO count 00:08:27.321 9023.803 - 9074.215: 0.0129% ( 1) 00:08:27.321 9074.215 - 9124.628: 0.0387% ( 2) 00:08:27.321 9124.628 - 9175.040: 0.0775% ( 3) 00:08:27.321 9175.040 - 9225.452: 0.1808% ( 8) 00:08:27.321 9225.452 - 9275.865: 0.3487% ( 13) 00:08:27.321 9275.865 - 9326.277: 0.4649% ( 9) 00:08:27.321 9326.277 - 9376.689: 0.5165% ( 4) 00:08:27.321 9376.689 - 9427.102: 0.5424% ( 2) 00:08:27.321 9427.102 - 9477.514: 0.5811% ( 3) 00:08:27.321 9477.514 - 9527.926: 0.6198% ( 3) 00:08:27.321 9527.926 - 9578.338: 0.6586% ( 3) 00:08:27.321 9578.338 - 9628.751: 0.6973% ( 3) 00:08:27.321 9628.751 - 
9679.163: 0.7231% ( 2) 00:08:27.321 9679.163 - 9729.575: 0.7748% ( 4) 00:08:27.321 9729.575 - 9779.988: 0.8006% ( 2) 00:08:27.321 9779.988 - 9830.400: 0.8264% ( 2) 00:08:27.321 12300.603 - 12351.015: 0.8781% ( 4) 00:08:27.321 12351.015 - 12401.428: 0.9168% ( 3) 00:08:27.321 12401.428 - 12451.840: 0.9556% ( 3) 00:08:27.321 12451.840 - 12502.252: 1.0072% ( 4) 00:08:27.321 12502.252 - 12552.665: 1.1105% ( 8) 00:08:27.321 12552.665 - 12603.077: 1.2009% ( 7) 00:08:27.321 12603.077 - 12653.489: 1.3688% ( 13) 00:08:27.321 12653.489 - 12703.902: 1.4463% ( 6) 00:08:27.321 12703.902 - 12754.314: 1.5496% ( 8) 00:08:27.321 12754.314 - 12804.726: 1.6916% ( 11) 00:08:27.321 12804.726 - 12855.138: 1.8337% ( 11) 00:08:27.321 12855.138 - 12905.551: 2.0790% ( 19) 00:08:27.321 12905.551 - 13006.375: 2.4277% ( 27) 00:08:27.321 13006.375 - 13107.200: 2.9313% ( 39) 00:08:27.321 13107.200 - 13208.025: 4.0548% ( 87) 00:08:27.321 13208.025 - 13308.849: 4.7650% ( 55) 00:08:27.321 13308.849 - 13409.674: 5.5914% ( 64) 00:08:27.321 13409.674 - 13510.498: 6.5470% ( 74) 00:08:27.321 13510.498 - 13611.323: 7.4638% ( 71) 00:08:27.321 13611.323 - 13712.148: 8.2386% ( 60) 00:08:27.321 13712.148 - 13812.972: 9.2588% ( 79) 00:08:27.321 13812.972 - 13913.797: 10.2531% ( 77) 00:08:27.321 13913.797 - 14014.622: 10.9633% ( 55) 00:08:27.321 14014.622 - 14115.446: 12.0739% ( 86) 00:08:27.321 14115.446 - 14216.271: 13.4556% ( 107) 00:08:27.321 14216.271 - 14317.095: 15.3022% ( 143) 00:08:27.321 14317.095 - 14417.920: 16.6968% ( 108) 00:08:27.321 14417.920 - 14518.745: 18.0527% ( 105) 00:08:27.321 14518.745 - 14619.569: 19.3569% ( 101) 00:08:27.321 14619.569 - 14720.394: 20.9452% ( 123) 00:08:27.321 14720.394 - 14821.218: 22.1462% ( 93) 00:08:27.321 14821.218 - 14922.043: 23.3988% ( 97) 00:08:27.321 14922.043 - 15022.868: 24.5351% ( 88) 00:08:27.321 15022.868 - 15123.692: 25.9168% ( 107) 00:08:27.321 15123.692 - 15224.517: 27.7247% ( 140) 00:08:27.321 15224.517 - 15325.342: 29.3518% ( 126) 00:08:27.321 15325.342 - 15426.166: 31.6374% ( 177) 00:08:27.321 15426.166 - 15526.991: 34.1167% ( 192) 00:08:27.321 15526.991 - 15627.815: 36.6219% ( 194) 00:08:27.321 15627.815 - 15728.640: 38.5589% ( 150) 00:08:27.321 15728.640 - 15829.465: 41.1415% ( 200) 00:08:27.321 15829.465 - 15930.289: 43.4272% ( 177) 00:08:27.321 15930.289 - 16031.114: 46.1648% ( 212) 00:08:27.321 16031.114 - 16131.938: 48.6699% ( 194) 00:08:27.321 16131.938 - 16232.763: 51.6142% ( 228) 00:08:27.321 16232.763 - 16333.588: 53.6544% ( 158) 00:08:27.321 16333.588 - 16434.412: 55.3461% ( 131) 00:08:27.321 16434.412 - 16535.237: 56.8440% ( 116) 00:08:27.321 16535.237 - 16636.062: 58.1741% ( 103) 00:08:27.321 16636.062 - 16736.886: 59.2071% ( 80) 00:08:27.321 16736.886 - 16837.711: 60.6921% ( 115) 00:08:27.321 16837.711 - 16938.535: 62.3450% ( 128) 00:08:27.321 16938.535 - 17039.360: 63.4814% ( 88) 00:08:27.321 17039.360 - 17140.185: 64.7469% ( 98) 00:08:27.321 17140.185 - 17241.009: 66.4127% ( 129) 00:08:27.321 17241.009 - 17341.834: 68.0527% ( 127) 00:08:27.321 17341.834 - 17442.658: 69.4473% ( 108) 00:08:27.321 17442.658 - 17543.483: 71.1389% ( 131) 00:08:27.321 17543.483 - 17644.308: 72.9468% ( 140) 00:08:27.321 17644.308 - 17745.132: 74.3156% ( 106) 00:08:27.321 17745.132 - 17845.957: 75.6327% ( 102) 00:08:27.321 17845.957 - 17946.782: 77.2856% ( 128) 00:08:27.321 17946.782 - 18047.606: 78.4091% ( 87) 00:08:27.321 18047.606 - 18148.431: 79.5842% ( 91) 00:08:27.321 18148.431 - 18249.255: 80.6302% ( 81) 00:08:27.321 18249.255 - 18350.080: 82.2056% ( 122) 00:08:27.321 
18350.080 - 18450.905: 83.6002% ( 108) 00:08:27.321 18450.905 - 18551.729: 85.1111% ( 117) 00:08:27.321 18551.729 - 18652.554: 86.7381% ( 126) 00:08:27.321 18652.554 - 18753.378: 88.0165% ( 99) 00:08:27.321 18753.378 - 18854.203: 88.9463% ( 72) 00:08:27.321 18854.203 - 18955.028: 89.7856% ( 65) 00:08:27.321 18955.028 - 19055.852: 90.5088% ( 56) 00:08:27.321 19055.852 - 19156.677: 91.1286% ( 48) 00:08:27.321 19156.677 - 19257.502: 91.5289% ( 31) 00:08:27.321 19257.502 - 19358.326: 91.8905% ( 28) 00:08:27.321 19358.326 - 19459.151: 92.2392% ( 27) 00:08:27.321 19459.151 - 19559.975: 92.4716% ( 18) 00:08:27.321 19559.975 - 19660.800: 92.5491% ( 6) 00:08:27.321 19660.800 - 19761.625: 92.6653% ( 9) 00:08:27.321 19761.625 - 19862.449: 92.8202% ( 12) 00:08:27.321 19862.449 - 19963.274: 92.9752% ( 12) 00:08:27.321 19963.274 - 20064.098: 93.3755% ( 31) 00:08:27.321 20064.098 - 20164.923: 93.6854% ( 24) 00:08:27.321 20164.923 - 20265.748: 93.8146% ( 10) 00:08:27.321 20265.748 - 20366.572: 93.9308% ( 9) 00:08:27.321 20366.572 - 20467.397: 94.0599% ( 10) 00:08:27.321 20467.397 - 20568.222: 94.1761% ( 9) 00:08:27.321 20568.222 - 20669.046: 94.3698% ( 15) 00:08:27.321 20669.046 - 20769.871: 94.5119% ( 11) 00:08:27.321 20769.871 - 20870.695: 94.7314% ( 17) 00:08:27.321 20870.695 - 20971.520: 95.0284% ( 23) 00:08:27.321 20971.520 - 21072.345: 95.2350% ( 16) 00:08:27.321 21072.345 - 21173.169: 95.3642% ( 10) 00:08:27.321 21173.169 - 21273.994: 95.4545% ( 7) 00:08:27.321 21273.994 - 21374.818: 95.5837% ( 10) 00:08:27.321 21374.818 - 21475.643: 95.8549% ( 21) 00:08:27.321 21475.643 - 21576.468: 96.0486% ( 15) 00:08:27.321 21576.468 - 21677.292: 96.3456% ( 23) 00:08:27.321 21677.292 - 21778.117: 96.7200% ( 29) 00:08:27.321 21778.117 - 21878.942: 97.3140% ( 46) 00:08:27.321 21878.942 - 21979.766: 97.5981% ( 22) 00:08:27.321 21979.766 - 22080.591: 97.8306% ( 18) 00:08:27.321 22080.591 - 22181.415: 97.9726% ( 11) 00:08:27.321 22181.415 - 22282.240: 98.1147% ( 11) 00:08:27.321 22282.240 - 22383.065: 98.2051% ( 7) 00:08:27.321 22383.065 - 22483.889: 98.2955% ( 7) 00:08:27.321 22483.889 - 22584.714: 98.3342% ( 3) 00:08:27.321 22584.714 - 22685.538: 98.3471% ( 1) 00:08:27.321 28029.243 - 28230.892: 98.3988% ( 4) 00:08:27.321 28230.892 - 28432.542: 98.4633% ( 5) 00:08:27.321 28432.542 - 28634.191: 98.5537% ( 7) 00:08:27.321 28634.191 - 28835.840: 98.6570% ( 8) 00:08:27.321 28835.840 - 29037.489: 98.7991% ( 11) 00:08:27.321 29037.489 - 29239.138: 98.9282% ( 10) 00:08:27.321 29239.138 - 29440.788: 99.0573% ( 10) 00:08:27.321 29440.788 - 29642.437: 99.1736% ( 9) 00:08:27.321 37708.406 - 37910.055: 99.1865% ( 1) 00:08:27.321 37910.055 - 38111.705: 99.2510% ( 5) 00:08:27.321 38111.705 - 38313.354: 99.3285% ( 6) 00:08:27.321 38313.354 - 38515.003: 99.4060% ( 6) 00:08:27.321 38515.003 - 38716.652: 99.4706% ( 5) 00:08:27.321 38716.652 - 38918.302: 99.5610% ( 7) 00:08:27.321 38918.302 - 39119.951: 99.6384% ( 6) 00:08:27.321 39119.951 - 39321.600: 99.7159% ( 6) 00:08:27.322 39321.600 - 39523.249: 99.8063% ( 7) 00:08:27.322 39523.249 - 39724.898: 99.8838% ( 6) 00:08:27.322 39724.898 - 39926.548: 99.9613% ( 6) 00:08:27.322 39926.548 - 40128.197: 100.0000% ( 3) 00:08:27.322 00:08:27.322 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:27.322 ============================================================================== 00:08:27.322 Range in us Cumulative IO count 00:08:27.322 7914.732 - 7965.145: 0.0129% ( 1) 00:08:27.322 8065.969 - 8116.382: 0.0517% ( 3) 00:08:27.322 8116.382 - 8166.794: 0.1291% ( 6) 
00:08:27.322 8166.794 - 8217.206: 0.2712% ( 11) 00:08:27.322 8217.206 - 8267.618: 0.4649% ( 15) 00:08:27.322 8267.618 - 8318.031: 0.5165% ( 4) 00:08:27.322 8318.031 - 8368.443: 0.5682% ( 4) 00:08:27.322 8368.443 - 8418.855: 0.6069% ( 3) 00:08:27.322 8418.855 - 8469.268: 0.6327% ( 2) 00:08:27.322 8469.268 - 8519.680: 0.6715% ( 3) 00:08:27.322 8519.680 - 8570.092: 0.7102% ( 3) 00:08:27.322 8570.092 - 8620.505: 0.7361% ( 2) 00:08:27.322 8620.505 - 8670.917: 0.7748% ( 3) 00:08:27.322 8670.917 - 8721.329: 0.8135% ( 3) 00:08:27.322 8721.329 - 8771.742: 0.8264% ( 1) 00:08:27.322 11897.305 - 11947.717: 0.8394% ( 1) 00:08:27.322 12098.954 - 12149.366: 0.8781% ( 3) 00:08:27.322 12149.366 - 12199.778: 0.9556% ( 6) 00:08:27.322 12199.778 - 12250.191: 1.0072% ( 4) 00:08:27.322 12250.191 - 12300.603: 1.0847% ( 6) 00:08:27.322 12300.603 - 12351.015: 1.1493% ( 5) 00:08:27.322 12351.015 - 12401.428: 1.2655% ( 9) 00:08:27.322 12401.428 - 12451.840: 1.3559% ( 7) 00:08:27.322 12451.840 - 12502.252: 1.4075% ( 4) 00:08:27.322 12502.252 - 12552.665: 1.4592% ( 4) 00:08:27.322 12552.665 - 12603.077: 1.4850% ( 2) 00:08:27.322 12603.077 - 12653.489: 1.5238% ( 3) 00:08:27.322 12653.489 - 12703.902: 1.5625% ( 3) 00:08:27.322 12703.902 - 12754.314: 1.6271% ( 5) 00:08:27.322 12754.314 - 12804.726: 1.7045% ( 6) 00:08:27.322 12804.726 - 12855.138: 1.7820% ( 6) 00:08:27.322 12855.138 - 12905.551: 1.8982% ( 9) 00:08:27.322 12905.551 - 13006.375: 2.3115% ( 32) 00:08:27.322 13006.375 - 13107.200: 2.8538% ( 42) 00:08:27.322 13107.200 - 13208.025: 3.5640% ( 55) 00:08:27.322 13208.025 - 13308.849: 4.3518% ( 61) 00:08:27.322 13308.849 - 13409.674: 5.3461% ( 77) 00:08:27.322 13409.674 - 13510.498: 6.4566% ( 86) 00:08:27.322 13510.498 - 13611.323: 7.4509% ( 77) 00:08:27.322 13611.323 - 13712.148: 8.5486% ( 85) 00:08:27.322 13712.148 - 13812.972: 9.8657% ( 102) 00:08:27.322 13812.972 - 13913.797: 11.1829% ( 102) 00:08:27.322 13913.797 - 14014.622: 12.6291% ( 112) 00:08:27.322 14014.622 - 14115.446: 13.7268% ( 85) 00:08:27.322 14115.446 - 14216.271: 14.8502% ( 87) 00:08:27.322 14216.271 - 14317.095: 15.9995% ( 89) 00:08:27.322 14317.095 - 14417.920: 17.3166% ( 102) 00:08:27.322 14417.920 - 14518.745: 18.3497% ( 80) 00:08:27.322 14518.745 - 14619.569: 19.6023% ( 97) 00:08:27.322 14619.569 - 14720.394: 20.8161% ( 94) 00:08:27.322 14720.394 - 14821.218: 22.3399% ( 118) 00:08:27.322 14821.218 - 14922.043: 23.6699% ( 103) 00:08:27.322 14922.043 - 15022.868: 25.3099% ( 127) 00:08:27.322 15022.868 - 15123.692: 26.7820% ( 114) 00:08:27.322 15123.692 - 15224.517: 28.2800% ( 116) 00:08:27.322 15224.517 - 15325.342: 29.8683% ( 123) 00:08:27.322 15325.342 - 15426.166: 31.7407% ( 145) 00:08:27.322 15426.166 - 15526.991: 33.5356% ( 139) 00:08:27.322 15526.991 - 15627.815: 35.3822% ( 143) 00:08:27.322 15627.815 - 15728.640: 37.4613% ( 161) 00:08:27.322 15728.640 - 15829.465: 39.1400% ( 130) 00:08:27.322 15829.465 - 15930.289: 41.1286% ( 154) 00:08:27.322 15930.289 - 16031.114: 43.0914% ( 152) 00:08:27.322 16031.114 - 16131.938: 45.0930% ( 155) 00:08:27.322 16131.938 - 16232.763: 47.4690% ( 184) 00:08:27.322 16232.763 - 16333.588: 50.4390% ( 230) 00:08:27.322 16333.588 - 16434.412: 52.7505% ( 179) 00:08:27.322 16434.412 - 16535.237: 55.1524% ( 186) 00:08:27.322 16535.237 - 16636.062: 57.6963% ( 197) 00:08:27.322 16636.062 - 16736.886: 59.9690% ( 176) 00:08:27.322 16736.886 - 16837.711: 61.8027% ( 142) 00:08:27.322 16837.711 - 16938.535: 63.4168% ( 125) 00:08:27.322 16938.535 - 17039.360: 64.9923% ( 122) 00:08:27.322 17039.360 - 17140.185: 66.4902% ( 
116) 00:08:27.322 17140.185 - 17241.009: 67.5103% ( 79) 00:08:27.322 17241.009 - 17341.834: 68.6338% ( 87) 00:08:27.322 17341.834 - 17442.658: 69.7572% ( 87) 00:08:27.322 17442.658 - 17543.483: 71.1648% ( 109) 00:08:27.322 17543.483 - 17644.308: 72.4948% ( 103) 00:08:27.322 17644.308 - 17745.132: 73.6570% ( 90) 00:08:27.322 17745.132 - 17845.957: 74.8838% ( 95) 00:08:27.322 17845.957 - 17946.782: 76.4979% ( 125) 00:08:27.322 17946.782 - 18047.606: 78.4995% ( 155) 00:08:27.322 18047.606 - 18148.431: 79.8812% ( 107) 00:08:27.322 18148.431 - 18249.255: 81.6245% ( 135) 00:08:27.322 18249.255 - 18350.080: 83.1612% ( 119) 00:08:27.322 18350.080 - 18450.905: 84.7495% ( 123) 00:08:27.322 18450.905 - 18551.729: 86.1183% ( 106) 00:08:27.322 18551.729 - 18652.554: 87.3709% ( 97) 00:08:27.322 18652.554 - 18753.378: 88.4943% ( 87) 00:08:27.322 18753.378 - 18854.203: 89.3208% ( 64) 00:08:27.322 18854.203 - 18955.028: 90.1214% ( 62) 00:08:27.322 18955.028 - 19055.852: 90.5733% ( 35) 00:08:27.322 19055.852 - 19156.677: 90.8833% ( 24) 00:08:27.322 19156.677 - 19257.502: 91.0899% ( 16) 00:08:27.322 19257.502 - 19358.326: 91.5031% ( 32) 00:08:27.322 19358.326 - 19459.151: 91.8388% ( 26) 00:08:27.322 19459.151 - 19559.975: 92.0713% ( 18) 00:08:27.322 19559.975 - 19660.800: 92.5103% ( 34) 00:08:27.322 19660.800 - 19761.625: 92.7169% ( 16) 00:08:27.322 19761.625 - 19862.449: 92.8332% ( 9) 00:08:27.322 19862.449 - 19963.274: 92.9365% ( 8) 00:08:27.322 19963.274 - 20064.098: 93.0398% ( 8) 00:08:27.322 20064.098 - 20164.923: 93.1431% ( 8) 00:08:27.322 20164.923 - 20265.748: 93.3110% ( 13) 00:08:27.322 20265.748 - 20366.572: 93.5305% ( 17) 00:08:27.322 20366.572 - 20467.397: 93.8146% ( 22) 00:08:27.322 20467.397 - 20568.222: 94.1374% ( 25) 00:08:27.322 20568.222 - 20669.046: 94.4861% ( 27) 00:08:27.322 20669.046 - 20769.871: 94.8218% ( 26) 00:08:27.322 20769.871 - 20870.695: 95.0671% ( 19) 00:08:27.322 20870.695 - 20971.520: 95.3771% ( 24) 00:08:27.322 20971.520 - 21072.345: 95.5708% ( 15) 00:08:27.322 21072.345 - 21173.169: 95.8419% ( 21) 00:08:27.322 21173.169 - 21273.994: 96.1519% ( 24) 00:08:27.322 21273.994 - 21374.818: 96.6038% ( 35) 00:08:27.322 21374.818 - 21475.643: 97.0300% ( 33) 00:08:27.322 21475.643 - 21576.468: 97.1591% ( 10) 00:08:27.322 21576.468 - 21677.292: 97.2624% ( 8) 00:08:27.322 21677.292 - 21778.117: 97.3657% ( 8) 00:08:27.322 21778.117 - 21878.942: 97.4432% ( 6) 00:08:27.322 21878.942 - 21979.766: 97.6369% ( 15) 00:08:27.322 21979.766 - 22080.591: 97.8177% ( 14) 00:08:27.322 22080.591 - 22181.415: 98.0501% ( 18) 00:08:27.322 22181.415 - 22282.240: 98.2180% ( 13) 00:08:27.322 22282.240 - 22383.065: 98.2955% ( 6) 00:08:27.322 22383.065 - 22483.889: 98.3471% ( 4) 00:08:27.322 27424.295 - 27625.945: 98.4117% ( 5) 00:08:27.322 27625.945 - 27827.594: 98.5537% ( 11) 00:08:27.322 27827.594 - 28029.243: 98.6958% ( 11) 00:08:27.322 28029.243 - 28230.892: 98.7862% ( 7) 00:08:27.322 28230.892 - 28432.542: 98.9282% ( 11) 00:08:27.322 28432.542 - 28634.191: 99.0702% ( 11) 00:08:27.322 28634.191 - 28835.840: 99.1736% ( 8) 00:08:27.322 38313.354 - 38515.003: 99.2769% ( 8) 00:08:27.322 38515.003 - 38716.652: 99.3931% ( 9) 00:08:27.322 38716.652 - 38918.302: 99.5351% ( 11) 00:08:27.322 38918.302 - 39119.951: 99.6643% ( 10) 00:08:27.322 39119.951 - 39321.600: 99.8192% ( 12) 00:08:27.322 39321.600 - 39523.249: 99.9483% ( 10) 00:08:27.322 39523.249 - 39724.898: 100.0000% ( 4) 00:08:27.322 00:08:27.322 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:27.322 
============================================================================== 00:08:27.322 Range in us Cumulative IO count 00:08:27.322 6805.662 - 6856.074: 0.0129% ( 1) 00:08:27.322 6956.898 - 7007.311: 0.0387% ( 2) 00:08:27.322 7007.311 - 7057.723: 0.0904% ( 4) 00:08:27.322 7057.723 - 7108.135: 0.1550% ( 5) 00:08:27.322 7108.135 - 7158.548: 0.2583% ( 8) 00:08:27.322 7158.548 - 7208.960: 0.5165% ( 20) 00:08:27.322 7208.960 - 7259.372: 0.6069% ( 7) 00:08:27.322 7259.372 - 7309.785: 0.6715% ( 5) 00:08:27.322 7309.785 - 7360.197: 0.7102% ( 3) 00:08:27.322 7360.197 - 7410.609: 0.7490% ( 3) 00:08:27.322 7410.609 - 7461.022: 0.7877% ( 3) 00:08:27.322 7461.022 - 7511.434: 0.8135% ( 2) 00:08:27.322 7511.434 - 7561.846: 0.8264% ( 1) 00:08:27.322 12603.077 - 12653.489: 0.8652% ( 3) 00:08:27.322 12653.489 - 12703.902: 0.9814% ( 9) 00:08:27.322 12703.902 - 12754.314: 1.1493% ( 13) 00:08:27.322 12754.314 - 12804.726: 1.3171% ( 13) 00:08:27.322 12804.726 - 12855.138: 1.4850% ( 13) 00:08:27.322 12855.138 - 12905.551: 1.8466% ( 28) 00:08:27.322 12905.551 - 13006.375: 2.4535% ( 47) 00:08:27.322 13006.375 - 13107.200: 3.4866% ( 80) 00:08:27.322 13107.200 - 13208.025: 4.2485% ( 59) 00:08:27.322 13208.025 - 13308.849: 5.1395% ( 69) 00:08:27.322 13308.849 - 13409.674: 6.0305% ( 69) 00:08:27.322 13409.674 - 13510.498: 7.5026% ( 114) 00:08:27.322 13510.498 - 13611.323: 8.4969% ( 77) 00:08:27.322 13611.323 - 13712.148: 9.5687% ( 83) 00:08:27.322 13712.148 - 13812.972: 10.6663% ( 85) 00:08:27.322 13812.972 - 13913.797: 11.6219% ( 74) 00:08:27.322 13913.797 - 14014.622: 12.5646% ( 73) 00:08:27.322 14014.622 - 14115.446: 13.4685% ( 70) 00:08:27.323 14115.446 - 14216.271: 14.5145% ( 81) 00:08:27.323 14216.271 - 14317.095: 15.6896% ( 91) 00:08:27.323 14317.095 - 14417.920: 16.9680% ( 99) 00:08:27.323 14417.920 - 14518.745: 18.5176% ( 120) 00:08:27.323 14518.745 - 14619.569: 19.7572% ( 96) 00:08:27.323 14619.569 - 14720.394: 21.2164% ( 113) 00:08:27.323 14720.394 - 14821.218: 23.0243% ( 140) 00:08:27.323 14821.218 - 14922.043: 24.3543% ( 103) 00:08:27.323 14922.043 - 15022.868: 25.9039% ( 120) 00:08:27.323 15022.868 - 15123.692: 27.9959% ( 162) 00:08:27.323 15123.692 - 15224.517: 29.5713% ( 122) 00:08:27.323 15224.517 - 15325.342: 31.2113% ( 127) 00:08:27.323 15325.342 - 15426.166: 32.7221% ( 117) 00:08:27.323 15426.166 - 15526.991: 34.3363% ( 125) 00:08:27.323 15526.991 - 15627.815: 36.4153% ( 161) 00:08:27.323 15627.815 - 15728.640: 38.6880% ( 176) 00:08:27.323 15728.640 - 15829.465: 41.0253% ( 181) 00:08:27.323 15829.465 - 15930.289: 42.9494% ( 149) 00:08:27.323 15930.289 - 16031.114: 45.0284% ( 161) 00:08:27.323 16031.114 - 16131.938: 47.2753% ( 174) 00:08:27.323 16131.938 - 16232.763: 48.9669% ( 131) 00:08:27.323 16232.763 - 16333.588: 50.8910% ( 149) 00:08:27.323 16333.588 - 16434.412: 53.5899% ( 209) 00:08:27.323 16434.412 - 16535.237: 55.6560% ( 160) 00:08:27.323 16535.237 - 16636.062: 57.5413% ( 146) 00:08:27.323 16636.062 - 16736.886: 60.0852% ( 197) 00:08:27.323 16736.886 - 16837.711: 62.3321% ( 174) 00:08:27.323 16837.711 - 16938.535: 64.0367% ( 132) 00:08:27.323 16938.535 - 17039.360: 65.3926% ( 105) 00:08:27.323 17039.360 - 17140.185: 66.3998% ( 78) 00:08:27.323 17140.185 - 17241.009: 67.2908% ( 69) 00:08:27.323 17241.009 - 17341.834: 68.3239% ( 80) 00:08:27.323 17341.834 - 17442.658: 69.4344% ( 86) 00:08:27.323 17442.658 - 17543.483: 70.5966% ( 90) 00:08:27.323 17543.483 - 17644.308: 71.7200% ( 87) 00:08:27.323 17644.308 - 17745.132: 72.7014% ( 76) 00:08:27.323 17745.132 - 17845.957: 73.7216% ( 79) 
00:08:27.323 17845.957 - 17946.782: 74.5739% ( 66) 00:08:27.323 17946.782 - 18047.606: 75.9298% ( 105) 00:08:27.323 18047.606 - 18148.431: 77.5310% ( 124) 00:08:27.323 18148.431 - 18249.255: 79.5971% ( 160) 00:08:27.323 18249.255 - 18350.080: 81.4566% ( 144) 00:08:27.323 18350.080 - 18450.905: 82.8383% ( 107) 00:08:27.323 18450.905 - 18551.729: 84.6204% ( 138) 00:08:27.323 18551.729 - 18652.554: 86.4282% ( 140) 00:08:27.323 18652.554 - 18753.378: 87.9778% ( 120) 00:08:27.323 18753.378 - 18854.203: 89.0625% ( 84) 00:08:27.323 18854.203 - 18955.028: 89.8115% ( 58) 00:08:27.323 18955.028 - 19055.852: 90.5863% ( 60) 00:08:27.323 19055.852 - 19156.677: 90.9349% ( 27) 00:08:27.323 19156.677 - 19257.502: 91.2319% ( 23) 00:08:27.323 19257.502 - 19358.326: 91.5031% ( 21) 00:08:27.323 19358.326 - 19459.151: 91.7226% ( 17) 00:08:27.323 19459.151 - 19559.975: 91.8130% ( 7) 00:08:27.323 19559.975 - 19660.800: 91.8905% ( 6) 00:08:27.323 19660.800 - 19761.625: 92.0713% ( 14) 00:08:27.323 19761.625 - 19862.449: 92.2779% ( 16) 00:08:27.323 19862.449 - 19963.274: 92.5362% ( 20) 00:08:27.323 19963.274 - 20064.098: 92.8332% ( 23) 00:08:27.323 20064.098 - 20164.923: 93.3884% ( 43) 00:08:27.323 20164.923 - 20265.748: 93.6725% ( 22) 00:08:27.323 20265.748 - 20366.572: 94.0470% ( 29) 00:08:27.323 20366.572 - 20467.397: 94.3957% ( 27) 00:08:27.323 20467.397 - 20568.222: 94.6927% ( 23) 00:08:27.323 20568.222 - 20669.046: 95.1963% ( 39) 00:08:27.323 20669.046 - 20769.871: 95.4416% ( 19) 00:08:27.323 20769.871 - 20870.695: 95.6095% ( 13) 00:08:27.323 20870.695 - 20971.520: 95.8290% ( 17) 00:08:27.323 20971.520 - 21072.345: 96.0227% ( 15) 00:08:27.323 21072.345 - 21173.169: 96.1906% ( 13) 00:08:27.323 21173.169 - 21273.994: 96.3068% ( 9) 00:08:27.323 21273.994 - 21374.818: 96.4489% ( 11) 00:08:27.323 21374.818 - 21475.643: 96.6038% ( 12) 00:08:27.323 21475.643 - 21576.468: 96.7459% ( 11) 00:08:27.323 21576.468 - 21677.292: 96.8879% ( 11) 00:08:27.323 21677.292 - 21778.117: 97.0429% ( 12) 00:08:27.323 21778.117 - 21878.942: 97.2495% ( 16) 00:08:27.323 21878.942 - 21979.766: 97.4174% ( 13) 00:08:27.323 21979.766 - 22080.591: 97.5465% ( 10) 00:08:27.323 22080.591 - 22181.415: 97.6756% ( 10) 00:08:27.323 22181.415 - 22282.240: 97.8564% ( 14) 00:08:27.323 22282.240 - 22383.065: 98.1018% ( 19) 00:08:27.323 22383.065 - 22483.889: 98.2180% ( 9) 00:08:27.323 22483.889 - 22584.714: 98.3084% ( 7) 00:08:27.323 22584.714 - 22685.538: 98.3471% ( 3) 00:08:27.323 27222.646 - 27424.295: 98.3858% ( 3) 00:08:27.323 27424.295 - 27625.945: 98.5021% ( 9) 00:08:27.323 27625.945 - 27827.594: 98.6312% ( 10) 00:08:27.323 27827.594 - 28029.243: 98.7732% ( 11) 00:08:27.323 28029.243 - 28230.892: 98.8895% ( 9) 00:08:27.323 28230.892 - 28432.542: 99.0444% ( 12) 00:08:27.323 28432.542 - 28634.191: 99.1736% ( 10) 00:08:27.323 37910.055 - 38111.705: 99.2252% ( 4) 00:08:27.323 38111.705 - 38313.354: 99.3543% ( 10) 00:08:27.323 38313.354 - 38515.003: 99.4835% ( 10) 00:08:27.323 38515.003 - 38716.652: 99.6126% ( 10) 00:08:27.323 38716.652 - 38918.302: 99.7417% ( 10) 00:08:27.323 38918.302 - 39119.951: 99.8580% ( 9) 00:08:27.323 39119.951 - 39321.600: 99.9354% ( 6) 00:08:27.323 39321.600 - 39523.249: 100.0000% ( 5) 00:08:27.323 00:08:27.323 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:27.323 ============================================================================== 00:08:27.323 Range in us Cumulative IO count 00:08:27.323 6024.271 - 6049.477: 0.0128% ( 1) 00:08:27.323 6049.477 - 6074.683: 0.0512% ( 3) 00:08:27.323 6074.683 - 
6099.889: 0.0897% ( 3) 00:08:27.323 6099.889 - 6125.095: 0.1025% ( 1) 00:08:27.323 6125.095 - 6150.302: 0.1281% ( 2) 00:08:27.323 6150.302 - 6175.508: 0.1537% ( 2) 00:08:27.323 6175.508 - 6200.714: 0.1921% ( 3) 00:08:27.323 6200.714 - 6225.920: 0.2177% ( 2) 00:08:27.323 6225.920 - 6251.126: 0.3074% ( 7) 00:08:27.323 6251.126 - 6276.332: 0.4483% ( 11) 00:08:27.323 6276.332 - 6301.538: 0.5379% ( 7) 00:08:27.323 6301.538 - 6326.745: 0.5635% ( 2) 00:08:27.323 6326.745 - 6351.951: 0.5891% ( 2) 00:08:27.323 6351.951 - 6377.157: 0.6019% ( 1) 00:08:27.323 6377.157 - 6402.363: 0.6148% ( 1) 00:08:27.323 6402.363 - 6427.569: 0.6404% ( 2) 00:08:27.323 6427.569 - 6452.775: 0.6660% ( 2) 00:08:27.323 6452.775 - 6503.188: 0.7044% ( 3) 00:08:27.323 6503.188 - 6553.600: 0.7556% ( 4) 00:08:27.323 6553.600 - 6604.012: 0.7941% ( 3) 00:08:27.323 6604.012 - 6654.425: 0.8197% ( 2) 00:08:27.323 12250.191 - 12300.603: 0.8325% ( 1) 00:08:27.323 12300.603 - 12351.015: 0.8709% ( 3) 00:08:27.323 12351.015 - 12401.428: 0.9093% ( 3) 00:08:27.323 12401.428 - 12451.840: 0.9477% ( 3) 00:08:27.323 12451.840 - 12502.252: 0.9990% ( 4) 00:08:27.323 12502.252 - 12552.665: 1.0374% ( 3) 00:08:27.323 12552.665 - 12603.077: 1.1142% ( 6) 00:08:27.323 12603.077 - 12653.489: 1.1911% ( 6) 00:08:27.323 12653.489 - 12703.902: 1.3832% ( 15) 00:08:27.323 12703.902 - 12754.314: 1.6393% ( 20) 00:08:27.323 12754.314 - 12804.726: 1.9211% ( 22) 00:08:27.323 12804.726 - 12855.138: 2.1901% ( 21) 00:08:27.323 12855.138 - 12905.551: 2.4206% ( 18) 00:08:27.323 12905.551 - 13006.375: 2.9969% ( 45) 00:08:27.323 13006.375 - 13107.200: 3.8422% ( 66) 00:08:27.323 13107.200 - 13208.025: 4.7643% ( 72) 00:08:27.323 13208.025 - 13308.849: 5.6993% ( 73) 00:08:27.323 13308.849 - 13409.674: 6.5830% ( 69) 00:08:27.323 13409.674 - 13510.498: 7.3770% ( 62) 00:08:27.323 13510.498 - 13611.323: 8.3760% ( 78) 00:08:27.323 13611.323 - 13712.148: 9.2469% ( 68) 00:08:27.323 13712.148 - 13812.972: 10.1819% ( 73) 00:08:27.323 13812.972 - 13913.797: 11.3858% ( 94) 00:08:27.323 13913.797 - 14014.622: 12.3975% ( 79) 00:08:27.323 14014.622 - 14115.446: 13.9728% ( 123) 00:08:27.323 14115.446 - 14216.271: 15.0359% ( 83) 00:08:27.323 14216.271 - 14317.095: 15.8427% ( 63) 00:08:27.323 14317.095 - 14417.920: 17.0594% ( 95) 00:08:27.323 14417.920 - 14518.745: 18.8012% ( 136) 00:08:27.323 14518.745 - 14619.569: 20.1332% ( 104) 00:08:27.323 14619.569 - 14720.394: 21.3755% ( 97) 00:08:27.323 14720.394 - 14821.218: 22.6050% ( 96) 00:08:27.324 14821.218 - 14922.043: 24.0010% ( 109) 00:08:27.324 14922.043 - 15022.868: 25.2690% ( 99) 00:08:27.324 15022.868 - 15123.692: 26.6906% ( 111) 00:08:27.324 15123.692 - 15224.517: 28.3683% ( 131) 00:08:27.324 15224.517 - 15325.342: 30.0973% ( 135) 00:08:27.324 15325.342 - 15426.166: 32.1081% ( 157) 00:08:27.324 15426.166 - 15526.991: 34.3238% ( 173) 00:08:27.324 15526.991 - 15627.815: 36.3601% ( 159) 00:08:27.324 15627.815 - 15728.640: 38.3709% ( 157) 00:08:27.324 15728.640 - 15829.465: 40.3945% ( 158) 00:08:27.324 15829.465 - 15930.289: 42.7382% ( 183) 00:08:27.324 15930.289 - 16031.114: 44.9539% ( 173) 00:08:27.324 16031.114 - 16131.938: 47.3873% ( 190) 00:08:27.324 16131.938 - 16232.763: 50.8581% ( 271) 00:08:27.324 16232.763 - 16333.588: 53.5476% ( 210) 00:08:27.324 16333.588 - 16434.412: 55.4559% ( 149) 00:08:27.324 16434.412 - 16535.237: 57.8253% ( 185) 00:08:27.324 16535.237 - 16636.062: 59.4390% ( 126) 00:08:27.324 16636.062 - 16736.886: 60.7838% ( 105) 00:08:27.324 16736.886 - 16837.711: 62.3719% ( 124) 00:08:27.324 16837.711 - 16938.535: 
63.5886% ( 95) 00:08:27.324 16938.535 - 17039.360: 64.4980% ( 71) 00:08:27.324 17039.360 - 17140.185: 65.2536% ( 59) 00:08:27.324 17140.185 - 17241.009: 66.1501% ( 70) 00:08:27.324 17241.009 - 17341.834: 67.1491% ( 78) 00:08:27.324 17341.834 - 17442.658: 68.6475% ( 117) 00:08:27.324 17442.658 - 17543.483: 70.0948% ( 113) 00:08:27.324 17543.483 - 17644.308: 71.4011% ( 102) 00:08:27.324 17644.308 - 17745.132: 72.8227% ( 111) 00:08:27.324 17745.132 - 17845.957: 74.4365% ( 126) 00:08:27.324 17845.957 - 17946.782: 76.1655% ( 135) 00:08:27.324 17946.782 - 18047.606: 77.2797% ( 87) 00:08:27.324 18047.606 - 18148.431: 78.1762% ( 70) 00:08:27.324 18148.431 - 18249.255: 79.5210% ( 105) 00:08:27.324 18249.255 - 18350.080: 80.6481% ( 88) 00:08:27.324 18350.080 - 18450.905: 81.8007% ( 90) 00:08:27.324 18450.905 - 18551.729: 83.1071% ( 102) 00:08:27.324 18551.729 - 18652.554: 84.5671% ( 114) 00:08:27.324 18652.554 - 18753.378: 86.1040% ( 120) 00:08:27.324 18753.378 - 18854.203: 87.5768% ( 115) 00:08:27.324 18854.203 - 18955.028: 88.2684% ( 54) 00:08:27.324 18955.028 - 19055.852: 89.1650% ( 70) 00:08:27.324 19055.852 - 19156.677: 89.9078% ( 58) 00:08:27.324 19156.677 - 19257.502: 90.6506% ( 58) 00:08:27.324 19257.502 - 19358.326: 91.1245% ( 37) 00:08:27.324 19358.326 - 19459.151: 91.7008% ( 45) 00:08:27.324 19459.151 - 19559.975: 92.1235% ( 33) 00:08:27.324 19559.975 - 19660.800: 93.0712% ( 74) 00:08:27.324 19660.800 - 19761.625: 93.6091% ( 42) 00:08:27.324 19761.625 - 19862.449: 93.9549% ( 27) 00:08:27.324 19862.449 - 19963.274: 94.2239% ( 21) 00:08:27.324 19963.274 - 20064.098: 94.5056% ( 22) 00:08:27.324 20064.098 - 20164.923: 94.6977% ( 15) 00:08:27.324 20164.923 - 20265.748: 94.8130% ( 9) 00:08:27.324 20265.748 - 20366.572: 94.8899% ( 6) 00:08:27.324 20366.572 - 20467.397: 95.0179% ( 10) 00:08:27.324 20467.397 - 20568.222: 95.2613% ( 19) 00:08:27.324 20568.222 - 20669.046: 95.3893% ( 10) 00:08:27.324 20669.046 - 20769.871: 95.5046% ( 9) 00:08:27.324 20769.871 - 20870.695: 95.6583% ( 12) 00:08:27.324 20870.695 - 20971.520: 96.2731% ( 48) 00:08:27.324 20971.520 - 21072.345: 96.6189% ( 27) 00:08:27.324 21072.345 - 21173.169: 96.9647% ( 27) 00:08:27.324 21173.169 - 21273.994: 97.2208% ( 20) 00:08:27.324 21273.994 - 21374.818: 97.3745% ( 12) 00:08:27.324 21374.818 - 21475.643: 97.5154% ( 11) 00:08:27.324 21475.643 - 21576.468: 97.6306% ( 9) 00:08:27.324 21576.468 - 21677.292: 97.7331% ( 8) 00:08:27.324 21677.292 - 21778.117: 97.8868% ( 12) 00:08:27.324 21778.117 - 21878.942: 98.0149% ( 10) 00:08:27.324 21878.942 - 21979.766: 98.1045% ( 7) 00:08:27.324 21979.766 - 22080.591: 98.2710% ( 13) 00:08:27.324 22080.591 - 22181.415: 98.5656% ( 23) 00:08:27.324 22181.415 - 22282.240: 98.7193% ( 12) 00:08:27.324 22282.240 - 22383.065: 98.8601% ( 11) 00:08:27.324 22383.065 - 22483.889: 98.9498% ( 7) 00:08:27.324 22483.889 - 22584.714: 99.0010% ( 4) 00:08:27.324 22584.714 - 22685.538: 99.0394% ( 3) 00:08:27.324 22685.538 - 22786.363: 99.0907% ( 4) 00:08:27.324 22786.363 - 22887.188: 99.1419% ( 4) 00:08:27.324 22887.188 - 22988.012: 99.1803% ( 3) 00:08:27.324 28029.243 - 28230.892: 99.2828% ( 8) 00:08:27.324 28230.892 - 28432.542: 99.4109% ( 10) 00:08:27.324 28432.542 - 28634.191: 99.5389% ( 10) 00:08:27.324 28634.191 - 28835.840: 99.6798% ( 11) 00:08:27.324 28835.840 - 29037.489: 99.8207% ( 11) 00:08:27.324 29037.489 - 29239.138: 99.9616% ( 11) 00:08:27.324 29239.138 - 29440.788: 100.0000% ( 3) 00:08:27.324 00:08:27.324 13:21:23 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:27.324 00:08:27.324 
real 0m2.552s 00:08:27.324 user 0m2.168s 00:08:27.324 sys 0m0.261s 00:08:27.324 ************************************ 00:08:27.324 END TEST nvme_perf 00:08:27.324 ************************************ 00:08:27.324 13:21:23 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.324 13:21:23 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:27.324 13:21:23 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:27.324 13:21:23 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:27.324 13:21:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.324 13:21:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.324 ************************************ 00:08:27.324 START TEST nvme_hello_world 00:08:27.324 ************************************ 00:08:27.324 13:21:23 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:27.324 Initializing NVMe Controllers 00:08:27.324 Attached to 0000:00:10.0 00:08:27.324 Namespace ID: 1 size: 6GB 00:08:27.324 Attached to 0000:00:11.0 00:08:27.324 Namespace ID: 1 size: 5GB 00:08:27.324 Attached to 0000:00:13.0 00:08:27.324 Namespace ID: 1 size: 1GB 00:08:27.324 Attached to 0000:00:12.0 00:08:27.324 Namespace ID: 1 size: 4GB 00:08:27.324 Namespace ID: 2 size: 4GB 00:08:27.324 Namespace ID: 3 size: 4GB 00:08:27.324 Initialization complete. 00:08:27.324 INFO: using host memory buffer for IO 00:08:27.324 Hello world! 00:08:27.324 INFO: using host memory buffer for IO 00:08:27.324 Hello world! 00:08:27.324 INFO: using host memory buffer for IO 00:08:27.324 Hello world! 00:08:27.324 INFO: using host memory buffer for IO 00:08:27.324 Hello world! 00:08:27.324 INFO: using host memory buffer for IO 00:08:27.324 Hello world! 00:08:27.324 INFO: using host memory buffer for IO 00:08:27.324 Hello world! 
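The "Hello world!" lines above come from SPDK's hello_world example, which the harness launched as build/examples/hello_world -i 0. Below is a minimal sketch for reproducing that run by hand on the same checkout; only the binary path and the -i 0 shared-memory id are taken from the trace above, while the SPDK_DIR variable and the setup.sh bind/unbind steps are assumptions about a typical SPDK workflow.

SPDK_DIR=/home/vagrant/spdk_repo/spdk                 # checkout path as it appears in the trace
sudo "$SPDK_DIR/scripts/setup.sh"                     # assumed step: bind the NVMe controllers to a userspace driver
sudo "$SPDK_DIR/build/examples/hello_world" -i 0      # same binary and shared-memory id as the run above
sudo "$SPDK_DIR/scripts/setup.sh" reset               # assumed step: return the controllers to the kernel nvme driver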
00:08:27.324 00:08:27.324 real 0m0.220s 00:08:27.324 user 0m0.084s 00:08:27.324 sys 0m0.089s 00:08:27.324 13:21:23 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.324 ************************************ 00:08:27.324 END TEST nvme_hello_world 00:08:27.324 ************************************ 00:08:27.324 13:21:23 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:27.586 13:21:23 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:27.586 13:21:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.586 13:21:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.586 13:21:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.586 ************************************ 00:08:27.586 START TEST nvme_sgl 00:08:27.586 ************************************ 00:08:27.586 13:21:23 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:27.586 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:27.586 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:27.586 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:27.586 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:27.586 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:27.878 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:27.878 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:27.878 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:27.878 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:27.878 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:27.878 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:27.878 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:27.878 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:27.878 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:27.878 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:27.878 NVMe Readv/Writev Request test 00:08:27.878 Attached to 0000:00:10.0 00:08:27.878 Attached to 0000:00:11.0 00:08:27.878 Attached to 0000:00:13.0 00:08:27.878 Attached to 0000:00:12.0 00:08:27.878 0000:00:10.0: build_io_request_2 test passed 00:08:27.879 0000:00:10.0: build_io_request_4 test passed 00:08:27.879 0000:00:10.0: build_io_request_5 test passed 00:08:27.879 0000:00:10.0: build_io_request_6 test passed 00:08:27.879 0000:00:10.0: build_io_request_7 test passed 00:08:27.879 0000:00:10.0: build_io_request_10 test passed 00:08:27.879 0000:00:11.0: build_io_request_2 test passed 00:08:27.879 0000:00:11.0: build_io_request_4 test passed 00:08:27.879 0000:00:11.0: build_io_request_5 test passed 00:08:27.879 0000:00:11.0: build_io_request_6 test passed 00:08:27.879 0000:00:11.0: build_io_request_7 test passed 00:08:27.879 0000:00:11.0: build_io_request_10 test passed 00:08:27.879 Cleaning up... 00:08:27.879 00:08:27.879 real 0m0.300s 00:08:27.879 user 0m0.136s 00:08:27.879 sys 0m0.107s 00:08:27.879 ************************************ 00:08:27.879 END TEST nvme_sgl 00:08:27.879 ************************************ 00:08:27.879 13:21:23 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.879 13:21:23 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:27.879 13:21:23 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:27.879 13:21:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.879 13:21:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.879 13:21:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.879 ************************************ 00:08:27.879 START TEST nvme_e2edp 00:08:27.879 ************************************ 00:08:27.879 13:21:23 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:28.160 NVMe Write/Read with End-to-End data protection test 00:08:28.160 Attached to 0000:00:10.0 00:08:28.160 Attached to 0000:00:11.0 00:08:28.160 Attached to 0000:00:13.0 00:08:28.160 Attached to 0000:00:12.0 00:08:28.160 Cleaning up... 
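The nvme_sgl block above reports each build_io_request_N case per controller either as "Invalid IO length parameter" (rejected, which is what the test expects for those cases) or as "test passed". A rough sketch for tallying those outcomes per PCIe address from a saved copy of the raw log; the autotest.log filename is a placeholder, and the script assumes one log entry per line as in the original capture.

awk '/build_io_request_/ {
       addr = "unknown"
       for (i = 1; i <= NF; i++)
         if ($i ~ /^0000:00:1[0-9]\.0:$/) { addr = $i; sub(/:$/, "", addr) }
       outcome = ($0 ~ /test passed/) ? "passed" : "rejected"
       n[addr " " outcome]++
     }
     END { for (k in n) print n[k], k }' autotest.log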
00:08:28.160 ************************************ 00:08:28.160 END TEST nvme_e2edp 00:08:28.160 ************************************ 00:08:28.160 00:08:28.160 real 0m0.219s 00:08:28.160 user 0m0.070s 00:08:28.160 sys 0m0.102s 00:08:28.160 13:21:24 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.160 13:21:24 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:28.160 13:21:24 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:28.160 13:21:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.160 13:21:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.160 13:21:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.160 ************************************ 00:08:28.160 START TEST nvme_reserve 00:08:28.160 ************************************ 00:08:28.160 13:21:24 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:28.421 ===================================================== 00:08:28.421 NVMe Controller at PCI bus 0, device 16, function 0 00:08:28.421 ===================================================== 00:08:28.421 Reservations: Not Supported 00:08:28.421 ===================================================== 00:08:28.421 NVMe Controller at PCI bus 0, device 17, function 0 00:08:28.421 ===================================================== 00:08:28.421 Reservations: Not Supported 00:08:28.421 ===================================================== 00:08:28.421 NVMe Controller at PCI bus 0, device 19, function 0 00:08:28.421 ===================================================== 00:08:28.421 Reservations: Not Supported 00:08:28.421 ===================================================== 00:08:28.421 NVMe Controller at PCI bus 0, device 18, function 0 00:08:28.421 ===================================================== 00:08:28.421 Reservations: Not Supported 00:08:28.421 Reservation test passed 00:08:28.421 ************************************ 00:08:28.421 END TEST nvme_reserve 00:08:28.421 ************************************ 00:08:28.421 00:08:28.421 real 0m0.219s 00:08:28.421 user 0m0.074s 00:08:28.421 sys 0m0.094s 00:08:28.421 13:21:24 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.421 13:21:24 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:28.421 13:21:24 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:28.421 13:21:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.421 13:21:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.421 13:21:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.421 ************************************ 00:08:28.421 START TEST nvme_err_injection 00:08:28.421 ************************************ 00:08:28.421 13:21:24 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:28.682 NVMe Error Injection test 00:08:28.682 Attached to 0000:00:10.0 00:08:28.682 Attached to 0000:00:11.0 00:08:28.682 Attached to 0000:00:13.0 00:08:28.682 Attached to 0000:00:12.0 00:08:28.682 0000:00:10.0: get features failed as expected 00:08:28.682 0000:00:11.0: get features failed as expected 00:08:28.682 0000:00:13.0: get features failed as expected 00:08:28.682 0000:00:12.0: get features failed as expected 00:08:28.682 
0000:00:10.0: get features successfully as expected 00:08:28.682 0000:00:11.0: get features successfully as expected 00:08:28.682 0000:00:13.0: get features successfully as expected 00:08:28.682 0000:00:12.0: get features successfully as expected 00:08:28.682 0000:00:13.0: read failed as expected 00:08:28.682 0000:00:12.0: read failed as expected 00:08:28.682 0000:00:11.0: read failed as expected 00:08:28.682 0000:00:10.0: read failed as expected 00:08:28.682 0000:00:10.0: read successfully as expected 00:08:28.682 0000:00:11.0: read successfully as expected 00:08:28.682 0000:00:12.0: read successfully as expected 00:08:28.682 0000:00:13.0: read successfully as expected 00:08:28.682 Cleaning up... 00:08:28.682 00:08:28.682 real 0m0.219s 00:08:28.682 user 0m0.077s 00:08:28.682 sys 0m0.095s 00:08:28.682 ************************************ 00:08:28.682 END TEST nvme_err_injection 00:08:28.682 ************************************ 00:08:28.682 13:21:24 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.682 13:21:24 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:28.682 13:21:24 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:28.682 13:21:24 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:28.682 13:21:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.682 13:21:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.682 ************************************ 00:08:28.682 START TEST nvme_overhead 00:08:28.682 ************************************ 00:08:28.682 13:21:24 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:30.069 Initializing NVMe Controllers 00:08:30.069 Attached to 0000:00:10.0 00:08:30.069 Attached to 0000:00:11.0 00:08:30.069 Attached to 0000:00:13.0 00:08:30.069 Attached to 0000:00:12.0 00:08:30.069 Initialization complete. Launching workers. 
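The nvme_overhead prologue above shows the tool being started as overhead -o 4096 -t 1 -H -i 0, i.e. 4096-byte I/O for what is evidently a one-second run with the submit/complete histograms enabled. One second is a short sample, so here is a sketch of a longer manual run; only the binary path and the flags already visible in the trace are taken as given, and the 30-second duration is an arbitrary choice.

SPDK_DIR=/home/vagrant/spdk_repo/spdk                                # checkout path from the trace
sudo "$SPDK_DIR/test/nvme/overhead/overhead" -o 4096 -t 30 -H -i 0   # same flags as above, longer -t for a steadier average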
00:08:30.069 submit (in ns) avg, min, max = 15417.2, 11777.7, 202020.0 00:08:30.069 complete (in ns) avg, min, max = 9625.8, 8131.5, 434789.2 00:08:30.069 00:08:30.069 Submit histogram 00:08:30.069 ================ 00:08:30.069 Range in us Cumulative Count 00:08:30.069 11.766 - 11.815: 0.0382% ( 1) 00:08:30.069 12.062 - 12.111: 0.0763% ( 1) 00:08:30.069 12.308 - 12.357: 0.1145% ( 1) 00:08:30.069 12.406 - 12.455: 0.1527% ( 1) 00:08:30.069 12.997 - 13.095: 0.2290% ( 2) 00:08:30.069 13.095 - 13.194: 0.3053% ( 2) 00:08:30.069 13.194 - 13.292: 0.4198% ( 3) 00:08:30.069 13.292 - 13.391: 0.8015% ( 10) 00:08:30.069 13.391 - 13.489: 1.6412% ( 22) 00:08:30.069 13.489 - 13.588: 3.7786% ( 56) 00:08:30.069 13.588 - 13.686: 6.2595% ( 65) 00:08:30.069 13.686 - 13.785: 10.7252% ( 117) 00:08:30.069 13.785 - 13.883: 15.7252% ( 131) 00:08:30.069 13.883 - 13.982: 22.4427% ( 176) 00:08:30.069 13.982 - 14.080: 31.3740% ( 234) 00:08:30.069 14.080 - 14.178: 39.6565% ( 217) 00:08:30.069 14.178 - 14.277: 47.9389% ( 217) 00:08:30.069 14.277 - 14.375: 55.0763% ( 187) 00:08:30.069 14.375 - 14.474: 62.2901% ( 189) 00:08:30.069 14.474 - 14.572: 67.6336% ( 140) 00:08:30.069 14.572 - 14.671: 72.4427% ( 126) 00:08:30.069 14.671 - 14.769: 75.8397% ( 89) 00:08:30.069 14.769 - 14.868: 77.2519% ( 37) 00:08:30.069 14.868 - 14.966: 79.1985% ( 51) 00:08:30.069 14.966 - 15.065: 81.0687% ( 49) 00:08:30.069 15.065 - 15.163: 82.0229% ( 25) 00:08:30.069 15.163 - 15.262: 83.0153% ( 26) 00:08:30.069 15.262 - 15.360: 83.9313% ( 24) 00:08:30.069 15.360 - 15.458: 84.6947% ( 20) 00:08:30.069 15.458 - 15.557: 85.1527% ( 12) 00:08:30.069 15.557 - 15.655: 85.8397% ( 18) 00:08:30.069 15.655 - 15.754: 86.3359% ( 13) 00:08:30.069 15.754 - 15.852: 86.5267% ( 5) 00:08:30.069 15.852 - 15.951: 86.7939% ( 7) 00:08:30.069 15.951 - 16.049: 86.9084% ( 3) 00:08:30.069 16.049 - 16.148: 87.0992% ( 5) 00:08:30.069 16.148 - 16.246: 87.2137% ( 3) 00:08:30.069 16.246 - 16.345: 87.2519% ( 1) 00:08:30.069 16.345 - 16.443: 87.2901% ( 1) 00:08:30.069 16.443 - 16.542: 87.3282% ( 1) 00:08:30.069 16.542 - 16.640: 87.4809% ( 4) 00:08:30.069 16.640 - 16.738: 87.5573% ( 2) 00:08:30.069 16.738 - 16.837: 87.7099% ( 4) 00:08:30.069 16.837 - 16.935: 87.7863% ( 2) 00:08:30.069 16.935 - 17.034: 87.8244% ( 1) 00:08:30.069 17.132 - 17.231: 87.9008% ( 2) 00:08:30.069 17.231 - 17.329: 87.9389% ( 1) 00:08:30.069 17.329 - 17.428: 88.0534% ( 3) 00:08:30.069 17.526 - 17.625: 88.0916% ( 1) 00:08:30.069 17.625 - 17.723: 88.1298% ( 1) 00:08:30.069 17.723 - 17.822: 88.2061% ( 2) 00:08:30.069 17.822 - 17.920: 88.3206% ( 3) 00:08:30.069 17.920 - 18.018: 88.3969% ( 2) 00:08:30.069 18.018 - 18.117: 88.5115% ( 3) 00:08:30.069 18.117 - 18.215: 88.6641% ( 4) 00:08:30.069 18.215 - 18.314: 88.8168% ( 4) 00:08:30.069 18.314 - 18.412: 88.9313% ( 3) 00:08:30.069 18.412 - 18.511: 89.0458% ( 3) 00:08:30.069 18.511 - 18.609: 89.3511% ( 8) 00:08:30.069 18.609 - 18.708: 89.5038% ( 4) 00:08:30.069 18.708 - 18.806: 89.6947% ( 5) 00:08:30.069 18.806 - 18.905: 89.9237% ( 6) 00:08:30.069 18.905 - 19.003: 90.2290% ( 8) 00:08:30.069 19.003 - 19.102: 90.4580% ( 6) 00:08:30.069 19.102 - 19.200: 90.5344% ( 2) 00:08:30.069 19.200 - 19.298: 90.7634% ( 6) 00:08:30.069 19.298 - 19.397: 90.8779% ( 3) 00:08:30.069 19.397 - 19.495: 91.2595% ( 10) 00:08:30.069 19.495 - 19.594: 91.5649% ( 8) 00:08:30.069 19.594 - 19.692: 91.7557% ( 5) 00:08:30.069 19.692 - 19.791: 91.9466% ( 5) 00:08:30.069 19.791 - 19.889: 92.1374% ( 5) 00:08:30.069 19.889 - 19.988: 92.2901% ( 4) 00:08:30.069 19.988 - 20.086: 92.3664% ( 2) 00:08:30.069 
20.086 - 20.185: 92.4809% ( 3) 00:08:30.069 20.185 - 20.283: 92.6336% ( 4) 00:08:30.069 20.283 - 20.382: 92.9008% ( 7) 00:08:30.069 20.382 - 20.480: 93.1679% ( 7) 00:08:30.069 20.480 - 20.578: 93.2443% ( 2) 00:08:30.069 20.578 - 20.677: 93.4733% ( 6) 00:08:30.069 20.677 - 20.775: 93.5496% ( 2) 00:08:30.069 20.775 - 20.874: 93.7405% ( 5) 00:08:30.069 20.874 - 20.972: 94.0458% ( 8) 00:08:30.069 20.972 - 21.071: 94.2748% ( 6) 00:08:30.069 21.071 - 21.169: 94.6565% ( 10) 00:08:30.069 21.169 - 21.268: 94.9618% ( 8) 00:08:30.069 21.268 - 21.366: 95.1908% ( 6) 00:08:30.069 21.366 - 21.465: 95.3435% ( 4) 00:08:30.069 21.465 - 21.563: 95.4962% ( 4) 00:08:30.069 21.563 - 21.662: 95.6489% ( 4) 00:08:30.069 21.662 - 21.760: 95.7252% ( 2) 00:08:30.069 21.760 - 21.858: 95.7634% ( 1) 00:08:30.069 21.858 - 21.957: 95.8779% ( 3) 00:08:30.069 21.957 - 22.055: 96.1450% ( 7) 00:08:30.069 22.055 - 22.154: 96.2977% ( 4) 00:08:30.069 22.154 - 22.252: 96.4122% ( 3) 00:08:30.069 22.252 - 22.351: 96.4885% ( 2) 00:08:30.069 22.351 - 22.449: 96.7939% ( 8) 00:08:30.069 22.449 - 22.548: 96.9847% ( 5) 00:08:30.069 22.548 - 22.646: 97.1756% ( 5) 00:08:30.069 22.646 - 22.745: 97.2519% ( 2) 00:08:30.069 22.745 - 22.843: 97.4046% ( 4) 00:08:30.069 22.942 - 23.040: 97.5191% ( 3) 00:08:30.069 23.040 - 23.138: 97.5954% ( 2) 00:08:30.069 23.138 - 23.237: 97.6336% ( 1) 00:08:30.069 23.237 - 23.335: 97.7481% ( 3) 00:08:30.069 23.335 - 23.434: 97.9008% ( 4) 00:08:30.069 23.434 - 23.532: 97.9389% ( 1) 00:08:30.069 23.532 - 23.631: 98.0153% ( 2) 00:08:30.069 23.631 - 23.729: 98.0534% ( 1) 00:08:30.069 23.729 - 23.828: 98.1298% ( 2) 00:08:30.069 23.926 - 24.025: 98.2061% ( 2) 00:08:30.069 24.123 - 24.222: 98.2824% ( 2) 00:08:30.069 24.418 - 24.517: 98.3206% ( 1) 00:08:30.069 24.517 - 24.615: 98.3969% ( 2) 00:08:30.069 24.714 - 24.812: 98.4733% ( 2) 00:08:30.069 24.911 - 25.009: 98.6260% ( 4) 00:08:30.069 25.009 - 25.108: 98.7405% ( 3) 00:08:30.069 25.206 - 25.403: 98.7786% ( 1) 00:08:30.069 25.600 - 25.797: 98.8168% ( 1) 00:08:30.069 26.388 - 26.585: 98.8550% ( 1) 00:08:30.069 26.585 - 26.782: 98.8931% ( 1) 00:08:30.069 26.782 - 26.978: 98.9313% ( 1) 00:08:30.069 27.175 - 27.372: 99.0076% ( 2) 00:08:30.069 27.766 - 27.963: 99.0458% ( 1) 00:08:30.069 27.963 - 28.160: 99.0840% ( 1) 00:08:30.069 28.554 - 28.751: 99.1221% ( 1) 00:08:30.069 29.145 - 29.342: 99.1985% ( 2) 00:08:30.069 29.342 - 29.538: 99.2748% ( 2) 00:08:30.069 30.129 - 30.326: 99.3511% ( 2) 00:08:30.069 30.326 - 30.523: 99.3893% ( 1) 00:08:30.069 30.523 - 30.720: 99.4275% ( 1) 00:08:30.069 30.720 - 30.917: 99.4656% ( 1) 00:08:30.069 32.492 - 32.689: 99.5038% ( 1) 00:08:30.069 32.689 - 32.886: 99.5420% ( 1) 00:08:30.069 34.068 - 34.265: 99.5802% ( 1) 00:08:30.069 34.658 - 34.855: 99.6183% ( 1) 00:08:30.069 35.249 - 35.446: 99.6565% ( 1) 00:08:30.069 37.809 - 38.006: 99.6947% ( 1) 00:08:30.069 44.505 - 44.702: 99.7328% ( 1) 00:08:30.070 57.502 - 57.895: 99.7710% ( 1) 00:08:30.070 68.923 - 69.317: 99.8092% ( 1) 00:08:30.070 84.283 - 84.677: 99.8473% ( 1) 00:08:30.070 94.523 - 94.917: 99.8855% ( 1) 00:08:30.070 105.551 - 106.338: 99.9237% ( 1) 00:08:30.070 166.203 - 166.991: 99.9618% ( 1) 00:08:30.070 201.649 - 203.225: 100.0000% ( 1) 00:08:30.070 00:08:30.070 Complete histogram 00:08:30.070 ================== 00:08:30.070 Range in us Cumulative Count 00:08:30.070 8.123 - 8.172: 0.1908% ( 5) 00:08:30.070 8.172 - 8.222: 0.3053% ( 3) 00:08:30.070 8.222 - 8.271: 1.3740% ( 28) 00:08:30.070 8.271 - 8.320: 3.2824% ( 50) 00:08:30.070 8.320 - 8.369: 6.8702% ( 94) 00:08:30.070 8.369 - 
8.418: 11.9084% ( 132) 00:08:30.070 8.418 - 8.468: 19.2366% ( 192) 00:08:30.070 8.468 - 8.517: 27.5954% ( 219) 00:08:30.070 8.517 - 8.566: 36.2214% ( 226) 00:08:30.070 8.566 - 8.615: 44.2366% ( 210) 00:08:30.070 8.615 - 8.665: 50.9924% ( 177) 00:08:30.070 8.665 - 8.714: 57.2137% ( 163) 00:08:30.070 8.714 - 8.763: 61.5267% ( 113) 00:08:30.070 8.763 - 8.812: 66.2214% ( 123) 00:08:30.070 8.812 - 8.862: 70.1527% ( 103) 00:08:30.070 8.862 - 8.911: 72.7481% ( 68) 00:08:30.070 8.911 - 8.960: 74.9618% ( 58) 00:08:30.070 8.960 - 9.009: 77.0229% ( 54) 00:08:30.070 9.009 - 9.058: 78.7023% ( 44) 00:08:30.070 9.058 - 9.108: 79.9237% ( 32) 00:08:30.070 9.108 - 9.157: 81.2595% ( 35) 00:08:30.070 9.157 - 9.206: 82.6718% ( 37) 00:08:30.070 9.206 - 9.255: 83.8168% ( 30) 00:08:30.070 9.255 - 9.305: 84.6565% ( 22) 00:08:30.070 9.305 - 9.354: 85.8397% ( 31) 00:08:30.070 9.354 - 9.403: 86.8321% ( 26) 00:08:30.070 9.403 - 9.452: 87.3664% ( 14) 00:08:30.070 9.452 - 9.502: 88.0153% ( 17) 00:08:30.070 9.502 - 9.551: 88.8550% ( 22) 00:08:30.070 9.551 - 9.600: 89.6565% ( 21) 00:08:30.070 9.600 - 9.649: 90.1527% ( 13) 00:08:30.070 9.649 - 9.698: 90.5344% ( 10) 00:08:30.070 9.698 - 9.748: 91.0305% ( 13) 00:08:30.070 9.748 - 9.797: 91.2595% ( 6) 00:08:30.070 9.797 - 9.846: 91.5649% ( 8) 00:08:30.070 9.846 - 9.895: 91.8702% ( 8) 00:08:30.070 9.895 - 9.945: 92.1374% ( 7) 00:08:30.070 9.945 - 9.994: 92.4427% ( 8) 00:08:30.070 9.994 - 10.043: 92.6336% ( 5) 00:08:30.070 10.043 - 10.092: 92.7099% ( 2) 00:08:30.070 10.092 - 10.142: 92.9008% ( 5) 00:08:30.070 10.142 - 10.191: 93.0534% ( 4) 00:08:30.070 10.191 - 10.240: 93.2061% ( 4) 00:08:30.070 10.289 - 10.338: 93.3206% ( 3) 00:08:30.070 10.388 - 10.437: 93.3588% ( 1) 00:08:30.070 10.437 - 10.486: 93.4351% ( 2) 00:08:30.070 10.535 - 10.585: 93.5115% ( 2) 00:08:30.070 10.585 - 10.634: 93.5496% ( 1) 00:08:30.070 10.634 - 10.683: 93.6260% ( 2) 00:08:30.070 10.683 - 10.732: 93.7023% ( 2) 00:08:30.070 10.732 - 10.782: 93.7786% ( 2) 00:08:30.070 10.782 - 10.831: 93.8931% ( 3) 00:08:30.070 10.831 - 10.880: 93.9313% ( 1) 00:08:30.070 10.880 - 10.929: 94.0840% ( 4) 00:08:30.070 10.929 - 10.978: 94.1221% ( 1) 00:08:30.070 10.978 - 11.028: 94.2366% ( 3) 00:08:30.070 11.028 - 11.077: 94.2748% ( 1) 00:08:30.070 11.077 - 11.126: 94.3893% ( 3) 00:08:30.070 11.126 - 11.175: 94.4275% ( 1) 00:08:30.070 11.175 - 11.225: 94.5420% ( 3) 00:08:30.070 11.225 - 11.274: 94.6183% ( 2) 00:08:30.070 11.274 - 11.323: 94.6947% ( 2) 00:08:30.070 11.323 - 11.372: 94.7710% ( 2) 00:08:30.070 11.372 - 11.422: 94.8092% ( 1) 00:08:30.070 11.471 - 11.520: 94.9237% ( 3) 00:08:30.070 11.520 - 11.569: 94.9618% ( 1) 00:08:30.070 11.569 - 11.618: 95.0382% ( 2) 00:08:30.070 11.668 - 11.717: 95.0763% ( 1) 00:08:30.070 11.766 - 11.815: 95.1908% ( 3) 00:08:30.070 11.815 - 11.865: 95.3435% ( 4) 00:08:30.070 11.865 - 11.914: 95.3817% ( 1) 00:08:30.070 12.062 - 12.111: 95.4198% ( 1) 00:08:30.070 12.160 - 12.209: 95.4580% ( 1) 00:08:30.070 12.209 - 12.258: 95.6107% ( 4) 00:08:30.070 12.258 - 12.308: 95.6489% ( 1) 00:08:30.070 12.308 - 12.357: 95.6870% ( 1) 00:08:30.070 12.357 - 12.406: 95.7634% ( 2) 00:08:30.070 12.406 - 12.455: 95.8779% ( 3) 00:08:30.070 12.455 - 12.505: 95.9924% ( 3) 00:08:30.070 12.554 - 12.603: 96.0305% ( 1) 00:08:30.070 12.603 - 12.702: 96.0687% ( 1) 00:08:30.070 12.702 - 12.800: 96.1069% ( 1) 00:08:30.070 12.800 - 12.898: 96.2595% ( 4) 00:08:30.070 12.898 - 12.997: 96.3359% ( 2) 00:08:30.070 12.997 - 13.095: 96.4122% ( 2) 00:08:30.070 13.194 - 13.292: 96.4504% ( 1) 00:08:30.070 13.292 - 13.391: 96.4885% ( 
1) 00:08:30.070 13.588 - 13.686: 96.5649% ( 2) 00:08:30.070 13.785 - 13.883: 96.6031% ( 1) 00:08:30.070 13.883 - 13.982: 96.6794% ( 2) 00:08:30.070 14.178 - 14.277: 96.7557% ( 2) 00:08:30.070 14.277 - 14.375: 96.7939% ( 1) 00:08:30.070 14.375 - 14.474: 96.8321% ( 1) 00:08:30.070 14.474 - 14.572: 96.8702% ( 1) 00:08:30.070 14.572 - 14.671: 96.9084% ( 1) 00:08:30.070 14.671 - 14.769: 96.9466% ( 1) 00:08:30.070 14.769 - 14.868: 96.9847% ( 1) 00:08:30.070 15.065 - 15.163: 97.0611% ( 2) 00:08:30.070 15.163 - 15.262: 97.1374% ( 2) 00:08:30.070 15.262 - 15.360: 97.1756% ( 1) 00:08:30.070 15.360 - 15.458: 97.2137% ( 1) 00:08:30.070 15.458 - 15.557: 97.2519% ( 1) 00:08:30.070 15.557 - 15.655: 97.4427% ( 5) 00:08:30.070 15.655 - 15.754: 97.5954% ( 4) 00:08:30.070 15.754 - 15.852: 97.7481% ( 4) 00:08:30.070 15.852 - 15.951: 97.7863% ( 1) 00:08:30.070 15.951 - 16.049: 97.8244% ( 1) 00:08:30.070 16.049 - 16.148: 97.8626% ( 1) 00:08:30.070 16.246 - 16.345: 97.9008% ( 1) 00:08:30.070 16.345 - 16.443: 97.9771% ( 2) 00:08:30.070 16.443 - 16.542: 98.0534% ( 2) 00:08:30.070 16.542 - 16.640: 98.1298% ( 2) 00:08:30.070 16.640 - 16.738: 98.1679% ( 1) 00:08:30.070 16.738 - 16.837: 98.2061% ( 1) 00:08:30.070 16.837 - 16.935: 98.2824% ( 2) 00:08:30.070 16.935 - 17.034: 98.3588% ( 2) 00:08:30.070 17.526 - 17.625: 98.3969% ( 1) 00:08:30.070 18.018 - 18.117: 98.4351% ( 1) 00:08:30.070 18.117 - 18.215: 98.4733% ( 1) 00:08:30.070 18.215 - 18.314: 98.5878% ( 3) 00:08:30.070 18.511 - 18.609: 98.6260% ( 1) 00:08:30.070 18.905 - 19.003: 98.6641% ( 1) 00:08:30.070 19.003 - 19.102: 98.7023% ( 1) 00:08:30.070 19.102 - 19.200: 98.7405% ( 1) 00:08:30.070 19.594 - 19.692: 98.7786% ( 1) 00:08:30.070 19.791 - 19.889: 98.8168% ( 1) 00:08:30.070 20.185 - 20.283: 98.8550% ( 1) 00:08:30.070 20.775 - 20.874: 98.8931% ( 1) 00:08:30.070 21.169 - 21.268: 98.9313% ( 1) 00:08:30.070 21.268 - 21.366: 99.0076% ( 2) 00:08:30.070 21.662 - 21.760: 99.0458% ( 1) 00:08:30.070 22.548 - 22.646: 99.0840% ( 1) 00:08:30.070 24.025 - 24.123: 99.1221% ( 1) 00:08:30.070 24.123 - 24.222: 99.1603% ( 1) 00:08:30.070 24.615 - 24.714: 99.1985% ( 1) 00:08:30.070 24.812 - 24.911: 99.2366% ( 1) 00:08:30.070 25.403 - 25.600: 99.2748% ( 1) 00:08:30.070 25.994 - 26.191: 99.3130% ( 1) 00:08:30.070 26.191 - 26.388: 99.3511% ( 1) 00:08:30.070 27.963 - 28.160: 99.3893% ( 1) 00:08:30.070 28.751 - 28.948: 99.4275% ( 1) 00:08:30.070 30.129 - 30.326: 99.4656% ( 1) 00:08:30.070 31.902 - 32.098: 99.5038% ( 1) 00:08:30.070 33.674 - 33.871: 99.5420% ( 1) 00:08:30.070 34.855 - 35.052: 99.5802% ( 1) 00:08:30.071 37.022 - 37.218: 99.6183% ( 1) 00:08:30.071 37.612 - 37.809: 99.6565% ( 1) 00:08:30.071 39.975 - 40.172: 99.6947% ( 1) 00:08:30.071 46.080 - 46.277: 99.7328% ( 1) 00:08:30.071 50.806 - 51.200: 99.7710% ( 1) 00:08:30.071 68.923 - 69.317: 99.8092% ( 1) 00:08:30.071 82.708 - 83.102: 99.8473% ( 1) 00:08:30.071 119.729 - 120.517: 99.8855% ( 1) 00:08:30.071 137.846 - 138.634: 99.9237% ( 1) 00:08:30.071 335.557 - 337.132: 99.9618% ( 1) 00:08:30.071 431.655 - 434.806: 100.0000% ( 1) 00:08:30.071 00:08:30.071 00:08:30.071 real 0m1.208s 00:08:30.071 user 0m1.058s 00:08:30.071 sys 0m0.104s 00:08:30.071 13:21:25 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.071 ************************************ 00:08:30.071 13:21:25 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:30.071 END TEST nvme_overhead 00:08:30.071 ************************************ 00:08:30.071 13:21:25 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:30.071 13:21:25 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:30.071 13:21:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.071 13:21:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.071 ************************************ 00:08:30.071 START TEST nvme_arbitration 00:08:30.071 ************************************ 00:08:30.071 13:21:26 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:33.369 Initializing NVMe Controllers 00:08:33.369 Attached to 0000:00:10.0 00:08:33.369 Attached to 0000:00:11.0 00:08:33.369 Attached to 0000:00:13.0 00:08:33.369 Attached to 0000:00:12.0 00:08:33.369 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:33.369 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:33.369 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:33.369 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:33.369 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:33.369 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:33.369 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:33.369 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:33.369 Initialization complete. Launching workers. 00:08:33.369 Starting thread on core 1 with urgent priority queue 00:08:33.369 Starting thread on core 2 with urgent priority queue 00:08:33.369 Starting thread on core 3 with urgent priority queue 00:08:33.369 Starting thread on core 0 with urgent priority queue 00:08:33.369 QEMU NVMe Ctrl (12340 ) core 0: 3776.00 IO/s 26.48 secs/100000 ios 00:08:33.369 QEMU NVMe Ctrl (12342 ) core 0: 3776.00 IO/s 26.48 secs/100000 ios 00:08:33.369 QEMU NVMe Ctrl (12341 ) core 1: 3840.00 IO/s 26.04 secs/100000 ios 00:08:33.369 QEMU NVMe Ctrl (12342 ) core 1: 3840.00 IO/s 26.04 secs/100000 ios 00:08:33.369 QEMU NVMe Ctrl (12343 ) core 2: 3520.00 IO/s 28.41 secs/100000 ios 00:08:33.369 QEMU NVMe Ctrl (12342 ) core 3: 3754.67 IO/s 26.63 secs/100000 ios 00:08:33.369 ======================================================== 00:08:33.369 00:08:33.369 00:08:33.369 real 0m3.243s 00:08:33.369 user 0m8.985s 00:08:33.369 sys 0m0.130s 00:08:33.369 13:21:29 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.369 ************************************ 00:08:33.369 END TEST nvme_arbitration 00:08:33.369 ************************************ 00:08:33.369 13:21:29 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:33.369 13:21:29 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:33.369 13:21:29 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.369 13:21:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.369 13:21:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.369 ************************************ 00:08:33.369 START TEST nvme_single_aen 00:08:33.369 ************************************ 00:08:33.369 13:21:29 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:33.695 Asynchronous Event Request test 00:08:33.695 Attached to 0000:00:10.0 00:08:33.695 Attached to 0000:00:11.0 00:08:33.695 Attached to 0000:00:13.0 00:08:33.695 Attached to 0000:00:12.0 00:08:33.695 
Reset controller to setup AER completions for this process 00:08:33.695 Registering asynchronous event callbacks... 00:08:33.695 Getting orig temperature thresholds of all controllers 00:08:33.695 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:33.695 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:33.695 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:33.695 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:33.695 Setting all controllers temperature threshold low to trigger AER 00:08:33.695 Waiting for all controllers temperature threshold to be set lower 00:08:33.695 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:33.695 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:33.695 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:33.695 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:33.695 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:33.695 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:33.695 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:33.695 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:33.695 Waiting for all controllers to trigger AER and reset threshold 00:08:33.695 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.695 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.695 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.696 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.696 Cleaning up... 00:08:33.696 00:08:33.696 real 0m0.215s 00:08:33.696 user 0m0.073s 00:08:33.696 sys 0m0.095s 00:08:33.696 ************************************ 00:08:33.696 END TEST nvme_single_aen 00:08:33.696 ************************************ 00:08:33.696 13:21:29 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.696 13:21:29 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:33.696 13:21:29 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:33.696 13:21:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.696 13:21:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.696 13:21:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.696 ************************************ 00:08:33.696 START TEST nvme_doorbell_aers 00:08:33.696 ************************************ 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:33.696 13:21:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:33.971 [2024-11-18 13:21:29.871984] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:08:43.981 Executing: test_write_invalid_db 00:08:43.981 Waiting for AER completion... 00:08:43.981 Failure: test_write_invalid_db 00:08:43.981 00:08:43.981 Executing: test_invalid_db_write_overflow_sq 00:08:43.981 Waiting for AER completion... 00:08:43.981 Failure: test_invalid_db_write_overflow_sq 00:08:43.981 00:08:43.981 Executing: test_invalid_db_write_overflow_cq 00:08:43.981 Waiting for AER completion... 00:08:43.981 Failure: test_invalid_db_write_overflow_cq 00:08:43.981 00:08:43.981 13:21:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:43.981 13:21:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:43.981 [2024-11-18 13:21:39.893226] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:08:53.997 Executing: test_write_invalid_db 00:08:53.997 Waiting for AER completion... 00:08:53.997 Failure: test_write_invalid_db 00:08:53.997 00:08:53.997 Executing: test_invalid_db_write_overflow_sq 00:08:53.997 Waiting for AER completion... 00:08:53.997 Failure: test_invalid_db_write_overflow_sq 00:08:53.997 00:08:53.997 Executing: test_invalid_db_write_overflow_cq 00:08:53.997 Waiting for AER completion... 00:08:53.998 Failure: test_invalid_db_write_overflow_cq 00:08:53.998 00:08:53.998 13:21:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:53.998 13:21:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:53.998 [2024-11-18 13:21:49.945551] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:03.993 Executing: test_write_invalid_db 00:09:03.993 Waiting for AER completion... 00:09:03.993 Failure: test_write_invalid_db 00:09:03.993 00:09:03.993 Executing: test_invalid_db_write_overflow_sq 00:09:03.993 Waiting for AER completion... 00:09:03.993 Failure: test_invalid_db_write_overflow_sq 00:09:03.993 00:09:03.993 Executing: test_invalid_db_write_overflow_cq 00:09:03.993 Waiting for AER completion... 
00:09:03.993 Failure: test_invalid_db_write_overflow_cq 00:09:03.993 00:09:03.993 13:21:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:03.993 13:21:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:03.993 [2024-11-18 13:21:59.954440] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.952 Executing: test_write_invalid_db 00:09:13.952 Waiting for AER completion... 00:09:13.952 Failure: test_write_invalid_db 00:09:13.952 00:09:13.952 Executing: test_invalid_db_write_overflow_sq 00:09:13.952 Waiting for AER completion... 00:09:13.952 Failure: test_invalid_db_write_overflow_sq 00:09:13.952 00:09:13.952 Executing: test_invalid_db_write_overflow_cq 00:09:13.952 Waiting for AER completion... 00:09:13.952 Failure: test_invalid_db_write_overflow_cq 00:09:13.952 00:09:13.952 00:09:13.952 real 0m40.196s 00:09:13.952 user 0m34.148s 00:09:13.952 sys 0m5.669s 00:09:13.952 13:22:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.952 13:22:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:13.952 ************************************ 00:09:13.952 END TEST nvme_doorbell_aers 00:09:13.952 ************************************ 00:09:13.952 13:22:09 nvme -- nvme/nvme.sh@97 -- # uname 00:09:13.952 13:22:09 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:13.952 13:22:09 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:13.952 13:22:09 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:13.952 13:22:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.952 13:22:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:13.952 ************************************ 00:09:13.952 START TEST nvme_multi_aen 00:09:13.952 ************************************ 00:09:13.952 13:22:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:13.952 [2024-11-18 13:22:09.992079] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.952 [2024-11-18 13:22:09.992289] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.992395] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.993593] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.993614] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.993622] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.994508] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. 
Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.994530] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.994538] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.995525] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.995610] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 [2024-11-18 13:22:09.995659] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75153) is not found. Dropping the request. 00:09:13.953 Child process pid: 75679 00:09:14.211 [Child] Asynchronous Event Request test 00:09:14.211 [Child] Attached to 0000:00:10.0 00:09:14.211 [Child] Attached to 0000:00:11.0 00:09:14.211 [Child] Attached to 0000:00:13.0 00:09:14.211 [Child] Attached to 0000:00:12.0 00:09:14.211 [Child] Registering asynchronous event callbacks... 00:09:14.211 [Child] Getting orig temperature thresholds of all controllers 00:09:14.211 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.211 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.211 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.211 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.211 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:14.211 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.211 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.211 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.211 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.211 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.211 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.211 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.211 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.211 [Child] Cleaning up... 00:09:14.211 Asynchronous Event Request test 00:09:14.211 Attached to 0000:00:10.0 00:09:14.211 Attached to 0000:00:11.0 00:09:14.211 Attached to 0000:00:13.0 00:09:14.211 Attached to 0000:00:12.0 00:09:14.211 Reset controller to setup AER completions for this process 00:09:14.211 Registering asynchronous event callbacks... 
00:09:14.211 Getting orig temperature thresholds of all controllers 00:09:14.211 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.211 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.211 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.211 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.211 Setting all controllers temperature threshold low to trigger AER 00:09:14.211 Waiting for all controllers temperature threshold to be set lower 00:09:14.211 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.211 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:14.211 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.211 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:14.211 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.211 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:14.211 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.211 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:14.211 Waiting for all controllers to trigger AER and reset threshold 00:09:14.211 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.211 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.211 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.211 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.211 Cleaning up... 00:09:14.211 00:09:14.211 real 0m0.378s 00:09:14.211 user 0m0.128s 00:09:14.211 sys 0m0.145s 00:09:14.211 13:22:10 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.211 13:22:10 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:14.211 ************************************ 00:09:14.211 END TEST nvme_multi_aen 00:09:14.211 ************************************ 00:09:14.211 13:22:10 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:14.211 13:22:10 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:14.211 13:22:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.211 13:22:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.211 ************************************ 00:09:14.211 START TEST nvme_startup 00:09:14.211 ************************************ 00:09:14.211 13:22:10 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:14.469 Initializing NVMe Controllers 00:09:14.469 Attached to 0000:00:10.0 00:09:14.469 Attached to 0000:00:11.0 00:09:14.469 Attached to 0000:00:13.0 00:09:14.469 Attached to 0000:00:12.0 00:09:14.469 Initialization complete. 00:09:14.469 Time used:139988.141 (us). 
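nvme_startup above was invoked as startup -t 1000000 and reports Time used:139988.141 (us), so all four controllers attached in roughly 140 ms against what is presumably a 1,000,000 us budget. A small sketch, assuming the raw log has been saved as autotest.log (a placeholder name), for pulling that figure out and flagging a run that exceeds the budget.

budget_us=1000000                                                      # the -t value passed to startup above
used_us=$(grep -o 'Time used:[0-9.]*' autotest.log | head -n1 | cut -d: -f2)
awk -v u="$used_us" -v b="$budget_us" \
    'BEGIN { printf "startup took %.1f us (budget %d us): %s\n", u, b, ((u <= b) ? "OK" : "TOO SLOW") }'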
00:09:14.469 00:09:14.469 real 0m0.188s 00:09:14.469 user 0m0.062s 00:09:14.469 sys 0m0.078s 00:09:14.469 13:22:10 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.469 13:22:10 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:14.469 ************************************ 00:09:14.469 END TEST nvme_startup 00:09:14.469 ************************************ 00:09:14.469 13:22:10 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:14.469 13:22:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.469 13:22:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.469 13:22:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.469 ************************************ 00:09:14.469 START TEST nvme_multi_secondary 00:09:14.469 ************************************ 00:09:14.469 13:22:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:14.469 13:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75730 00:09:14.469 13:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:14.469 13:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75731 00:09:14.469 13:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:14.469 13:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:17.810 Initializing NVMe Controllers 00:09:17.810 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:17.810 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:17.810 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:17.810 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:17.810 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:17.810 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:17.810 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:17.810 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:17.810 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:17.810 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:17.810 Initialization complete. Launching workers. 
00:09:17.810 ======================================================== 00:09:17.810 Latency(us) 00:09:17.810 Device Information : IOPS MiB/s Average min max 00:09:17.810 PCIE (0000:00:10.0) NSID 1 from core 1: 7966.44 31.12 2007.02 678.03 6744.70 00:09:17.810 PCIE (0000:00:11.0) NSID 1 from core 1: 7966.44 31.12 2008.06 710.05 6599.04 00:09:17.810 PCIE (0000:00:13.0) NSID 1 from core 1: 7966.44 31.12 2008.05 720.89 6371.33 00:09:17.810 PCIE (0000:00:12.0) NSID 1 from core 1: 7966.44 31.12 2008.02 725.29 6818.37 00:09:17.810 PCIE (0000:00:12.0) NSID 2 from core 1: 7966.44 31.12 2008.22 716.44 7203.98 00:09:17.810 PCIE (0000:00:12.0) NSID 3 from core 1: 7966.44 31.12 2008.37 717.44 7195.33 00:09:17.810 ======================================================== 00:09:17.810 Total : 47798.64 186.71 2007.96 678.03 7203.98 00:09:17.810 00:09:17.810 Initializing NVMe Controllers 00:09:17.810 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:17.810 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:17.810 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:17.810 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:17.810 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:17.810 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:17.810 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:17.810 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:17.810 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:17.810 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:17.810 Initialization complete. Launching workers. 00:09:17.810 ======================================================== 00:09:17.810 Latency(us) 00:09:17.810 Device Information : IOPS MiB/s Average min max 00:09:17.810 PCIE (0000:00:10.0) NSID 1 from core 2: 3328.31 13.00 4805.61 1118.46 14236.98 00:09:17.810 PCIE (0000:00:11.0) NSID 1 from core 2: 3328.31 13.00 4806.44 1211.60 16469.47 00:09:17.810 PCIE (0000:00:13.0) NSID 1 from core 2: 3328.31 13.00 4806.04 1151.64 16988.34 00:09:17.810 PCIE (0000:00:12.0) NSID 1 from core 2: 3328.31 13.00 4800.45 1134.24 17760.33 00:09:17.810 PCIE (0000:00:12.0) NSID 2 from core 2: 3328.31 13.00 4800.49 1149.13 13501.18 00:09:17.810 PCIE (0000:00:12.0) NSID 3 from core 2: 3328.31 13.00 4800.09 1041.04 13479.39 00:09:17.810 ======================================================== 00:09:17.810 Total : 19969.85 78.01 4803.19 1041.04 17760.33 00:09:17.810 00:09:17.810 13:22:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75730 00:09:19.709 Initializing NVMe Controllers 00:09:19.709 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.709 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.709 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.709 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.709 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:19.709 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:19.709 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:19.709 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:19.709 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:19.709 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:19.709 Initialization complete. Launching workers. 
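Note: the two latency tables above are consistent with the spdk_nvme_perf parameters traced earlier (-q 16 -w read -o 4096). By Little's law the outstanding I/O count per namespace is IOPS x average latency: for the core 1 run 7966.44 IOPS x 2007.02 us ~ 16.0, and for the much slower core 2 run 3328.31 IOPS x 4805.61 us ~ 16.0 as well, so both runs are pinned at the configured queue depth of 16 and differ only in per-I/O latency. Across all six namespaces the core 1 totals give 47798.64 IOPS x 2007.96 us ~ 96 = 6 x 16 outstanding I/Os. The same arithmetic as a one-liner, with the numbers copied from the table:

    awk 'BEGIN { printf "%.2f outstanding I/Os per namespace\n", 7966.44 * 2007.02 / 1e6 }'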
00:09:19.709 ======================================================== 00:09:19.709 Latency(us) 00:09:19.709 Device Information : IOPS MiB/s Average min max 00:09:19.709 PCIE (0000:00:10.0) NSID 1 from core 0: 11255.66 43.97 1420.28 678.35 5743.04 00:09:19.709 PCIE (0000:00:11.0) NSID 1 from core 0: 11255.66 43.97 1421.11 691.73 6197.89 00:09:19.709 PCIE (0000:00:13.0) NSID 1 from core 0: 11255.66 43.97 1421.09 592.40 6217.59 00:09:19.709 PCIE (0000:00:12.0) NSID 1 from core 0: 11255.66 43.97 1421.07 519.80 6610.36 00:09:19.709 PCIE (0000:00:12.0) NSID 2 from core 0: 11255.66 43.97 1421.05 448.25 6463.76 00:09:19.709 PCIE (0000:00:12.0) NSID 3 from core 0: 11255.66 43.97 1421.02 387.71 6032.13 00:09:19.709 ======================================================== 00:09:19.709 Total : 67533.98 263.80 1420.94 387.71 6610.36 00:09:19.709 00:09:19.709 13:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75731 00:09:19.709 13:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75800 00:09:19.709 13:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:19.709 13:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75801 00:09:19.709 13:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:19.709 13:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:22.991 Initializing NVMe Controllers 00:09:22.991 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.991 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.991 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.991 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.991 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:22.991 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:22.991 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:22.991 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:22.991 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:22.991 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:22.991 Initialization complete. Launching workers. 
00:09:22.991 ======================================================== 00:09:22.991 Latency(us) 00:09:22.991 Device Information : IOPS MiB/s Average min max 00:09:22.991 PCIE (0000:00:10.0) NSID 1 from core 1: 7967.47 31.12 2006.81 680.75 6380.79 00:09:22.991 PCIE (0000:00:11.0) NSID 1 from core 1: 7967.47 31.12 2007.79 704.87 6698.09 00:09:22.991 PCIE (0000:00:13.0) NSID 1 from core 1: 7967.47 31.12 2007.83 707.39 6061.35 00:09:22.991 PCIE (0000:00:12.0) NSID 1 from core 1: 7967.47 31.12 2007.82 709.36 5945.48 00:09:22.991 PCIE (0000:00:12.0) NSID 2 from core 1: 7967.47 31.12 2007.81 708.57 6421.21 00:09:22.991 PCIE (0000:00:12.0) NSID 3 from core 1: 7967.47 31.12 2007.87 705.85 6652.84 00:09:22.991 ======================================================== 00:09:22.991 Total : 47804.80 186.74 2007.65 680.75 6698.09 00:09:22.991 00:09:23.252 Initializing NVMe Controllers 00:09:23.252 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:23.252 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:23.252 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:23.252 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:23.252 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:23.252 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:23.252 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:23.252 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:23.252 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:23.252 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:23.252 Initialization complete. Launching workers. 00:09:23.252 ======================================================== 00:09:23.252 Latency(us) 00:09:23.252 Device Information : IOPS MiB/s Average min max 00:09:23.252 PCIE (0000:00:10.0) NSID 1 from core 0: 8035.50 31.39 1989.81 701.14 5915.43 00:09:23.252 PCIE (0000:00:11.0) NSID 1 from core 0: 8035.50 31.39 1990.66 723.38 6438.46 00:09:23.252 PCIE (0000:00:13.0) NSID 1 from core 0: 8035.50 31.39 1990.59 582.49 6073.68 00:09:23.252 PCIE (0000:00:12.0) NSID 1 from core 0: 8035.50 31.39 1990.51 507.51 6022.58 00:09:23.252 PCIE (0000:00:12.0) NSID 2 from core 0: 8035.50 31.39 1990.44 423.80 5974.18 00:09:23.252 PCIE (0000:00:12.0) NSID 3 from core 0: 8035.50 31.39 1990.37 367.57 6182.18 00:09:23.252 ======================================================== 00:09:23.252 Total : 48212.99 188.33 1990.40 367.57 6438.46 00:09:23.252 00:09:25.151 Initializing NVMe Controllers 00:09:25.151 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:25.151 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:25.151 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:25.151 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:25.151 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:25.151 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:25.151 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:25.151 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:25.151 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:25.151 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:25.151 Initialization complete. Launching workers. 
00:09:25.151 ======================================================== 00:09:25.151 Latency(us) 00:09:25.151 Device Information : IOPS MiB/s Average min max 00:09:25.151 PCIE (0000:00:10.0) NSID 1 from core 2: 4604.89 17.99 3472.48 711.10 13042.88 00:09:25.151 PCIE (0000:00:11.0) NSID 1 from core 2: 4604.89 17.99 3474.03 726.96 12644.14 00:09:25.151 PCIE (0000:00:13.0) NSID 1 from core 2: 4604.89 17.99 3473.82 666.23 12557.61 00:09:25.151 PCIE (0000:00:12.0) NSID 1 from core 2: 4604.89 17.99 3475.35 578.74 12678.54 00:09:25.151 PCIE (0000:00:12.0) NSID 2 from core 2: 4604.89 17.99 3474.46 729.09 12419.67 00:09:25.151 PCIE (0000:00:12.0) NSID 3 from core 2: 4604.89 17.99 3474.59 725.35 12754.30 00:09:25.151 ======================================================== 00:09:25.151 Total : 27629.33 107.93 3474.12 578.74 13042.88 00:09:25.151 00:09:25.151 ************************************ 00:09:25.151 END TEST nvme_multi_secondary 00:09:25.151 ************************************ 00:09:25.151 13:22:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75800 00:09:25.151 13:22:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75801 00:09:25.151 00:09:25.151 real 0m10.605s 00:09:25.151 user 0m18.337s 00:09:25.151 sys 0m0.571s 00:09:25.151 13:22:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.151 13:22:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:25.151 13:22:21 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:25.151 13:22:21 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:25.151 13:22:21 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74745 ]] 00:09:25.151 13:22:21 nvme -- common/autotest_common.sh@1094 -- # kill 74745 00:09:25.151 13:22:21 nvme -- common/autotest_common.sh@1095 -- # wait 74745 00:09:25.152 [2024-11-18 13:22:21.110813] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.110886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.110903] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.110920] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.111684] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.111728] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.111742] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.111759] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.112340] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 
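Note: the repeated 'The owning process (pid 75678) is not found. Dropping the request.' errors around this point are expected: kill_stub is tearing down the long-lived stub process (pid 74745), and admin requests still queued on behalf of a secondary test process that has already exited are simply dropped by the primary. Restated from the traced cleanup, a minimal sketch of what kill_stub does here (pids and paths are the ones from this run):

    kill 74745                    # stop the stub process whose existence was checked via /proc/74745
    wait 74745                    # reap it; orphaned pending admin requests are dropped,
                                  # producing the nvme_pcie_common.c errors seen here
    rm -f /var/run/spdk_stub0     # remove the stub's sentinel file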
00:09:25.152 [2024-11-18 13:22:21.112379] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.112393] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.112411] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.113261] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.113310] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.113325] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 [2024-11-18 13:22:21.113340] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75678) is not found. Dropping the request. 00:09:25.152 13:22:21 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:25.152 13:22:21 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:25.152 13:22:21 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:25.152 13:22:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.152 13:22:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.152 13:22:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.152 ************************************ 00:09:25.152 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:25.152 ************************************ 00:09:25.152 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:25.152 * Looking for test storage... 
00:09:25.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:25.152 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.152 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.152 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.410 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.411 --rc genhtml_branch_coverage=1 00:09:25.411 --rc genhtml_function_coverage=1 00:09:25.411 --rc genhtml_legend=1 00:09:25.411 --rc geninfo_all_blocks=1 00:09:25.411 --rc geninfo_unexecuted_blocks=1 00:09:25.411 00:09:25.411 ' 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.411 --rc genhtml_branch_coverage=1 00:09:25.411 --rc genhtml_function_coverage=1 00:09:25.411 --rc genhtml_legend=1 00:09:25.411 --rc geninfo_all_blocks=1 00:09:25.411 --rc geninfo_unexecuted_blocks=1 00:09:25.411 00:09:25.411 ' 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.411 --rc genhtml_branch_coverage=1 00:09:25.411 --rc genhtml_function_coverage=1 00:09:25.411 --rc genhtml_legend=1 00:09:25.411 --rc geninfo_all_blocks=1 00:09:25.411 --rc geninfo_unexecuted_blocks=1 00:09:25.411 00:09:25.411 ' 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.411 --rc genhtml_branch_coverage=1 00:09:25.411 --rc genhtml_function_coverage=1 00:09:25.411 --rc genhtml_legend=1 00:09:25.411 --rc geninfo_all_blocks=1 00:09:25.411 --rc geninfo_unexecuted_blocks=1 00:09:25.411 00:09:25.411 ' 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:25.411 
13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:25.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75961 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75961 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75961 ']' 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
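Note: the bdf discovery traced above is a small pipeline: gen_nvme.sh emits a bdev config entry for every NVMe controller it finds, jq pulls out the PCI addresses, and the script keeps the first one (0000:00:10.0 here) as the target for the stuck-admin-command test. Run standalone it looks like the sketch below; taking the first entry with head is just one convenient way to mirror what the script does with its array:

    # list all NVMe controller PCI addresses known to the config generator
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
    # keep the first controller, as nvme_reset_stuck_adm_cmd.sh does
    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)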
00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.411 13:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.411 [2024-11-18 13:22:21.456530] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:09:25.411 [2024-11-18 13:22:21.456752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75961 ] 00:09:25.671 [2024-11-18 13:22:21.620940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.671 [2024-11-18 13:22:21.643716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.671 [2024-11-18 13:22:21.644010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.671 [2024-11-18 13:22:21.644154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.671 [2024-11-18 13:22:21.644255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:26.237 nvme0n1 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_RaKeB.txt 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:26.237 true 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.237 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:26.496 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731936142 00:09:26.496 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75984 00:09:26.496 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:26.496 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:26.496 13:22:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:28.396 [2024-11-18 13:22:24.371553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:28.396 [2024-11-18 13:22:24.371830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:28.396 [2024-11-18 13:22:24.371851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:28.396 [2024-11-18 13:22:24.371866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:28.396 [2024-11-18 13:22:24.373675] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:28.396 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75984 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75984 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75984 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_RaKeB.txt 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:28.396 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_RaKeB.txt 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75961 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75961 ']' 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75961 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75961 00:09:28.397 killing process with pid 75961 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75961' 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75961 00:09:28.397 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75961 00:09:28.655 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:28.655 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:28.655 ************************************ 00:09:28.655 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:28.655 ************************************ 00:09:28.655 00:09:28.655 real 0m3.547s 
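Note: the stuck-admin-command test above is driven entirely through rpc.py against the spdk_tgt started as pid 75961: attach the controller, arm a one-shot error injection on admin opcode 10 (Get Features) that holds the command for up to 15 s and completes it with SCT=0/SC=1, submit a Get Features (number of queues, cdw10=7) via bdev_nvme_send_cmd, reset the controller while that command is pending, and finally detach. Condensed to the RPC calls actually traced (the 64-byte command blob is the base64 string shown above):

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 command blob>
    rpc.py bdev_nvme_reset_controller nvme0
    rpc.py bdev_nvme_detach_controller nvme0

The verification step decodes the captured completion: base64_decode_bits ends up with status=2, from which SC = (2 >> 1) & 0xff = 1 and SCT = (2 >> 9) & 0x3 = 0, matching the injected err_injection_sc=1 / err_injection_sct=0, and the measured diff_time of 2 s stays within the 5 s test_timeout.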
00:09:28.655 user 0m12.674s 00:09:28.655 sys 0m0.471s 00:09:28.655 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.655 13:22:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:28.655 13:22:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:28.655 13:22:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:28.655 13:22:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.655 13:22:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.655 13:22:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.655 ************************************ 00:09:28.655 START TEST nvme_fio 00:09:28.655 ************************************ 00:09:28.655 13:22:24 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:28.655 13:22:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:28.655 13:22:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:28.913 13:22:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:28.913 13:22:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:28.913 13:22:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:28.913 13:22:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.913 13:22:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.913 13:22:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:28.913 13:22:24 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:28.913 13:22:24 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.913 13:22:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:28.913 13:22:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:28.913 13:22:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:28.913 13:22:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:28.913 13:22:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:29.171 13:22:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:29.171 13:22:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:29.171 13:22:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:29.171 13:22:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:29.171 13:22:25 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:29.171 13:22:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:29.428 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:29.428 fio-3.35 00:09:29.428 Starting 1 thread 00:09:35.986 00:09:35.986 test: (groupid=0, jobs=1): err= 0: pid=76113: Mon Nov 18 13:22:32 2024 00:09:35.986 read: IOPS=25.2k, BW=98.3MiB/s (103MB/s)(197MiB/2001msec) 00:09:35.986 slat (nsec): min=3309, max=89877, avg=4776.08, stdev=1816.16 00:09:35.986 clat (usec): min=211, max=9519, avg=2542.65, stdev=629.26 00:09:35.986 lat (usec): min=215, max=9550, avg=2547.43, stdev=630.31 00:09:35.986 clat percentiles (usec): 00:09:35.986 | 1.00th=[ 1942], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:35.986 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:35.986 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2769], 95.00th=[ 3490], 00:09:35.987 | 99.00th=[ 5669], 99.50th=[ 6128], 99.90th=[ 7242], 99.95th=[ 7635], 00:09:35.987 | 99.99th=[ 9372] 00:09:35.987 bw ( KiB/s): min=95616, max=102706, per=99.03%, avg=99678.00, stdev=3656.35, samples=3 00:09:35.987 iops : min=23904, max=25676, avg=24919.33, stdev=913.88, samples=3 00:09:35.987 write: IOPS=25.0k, BW=97.8MiB/s (103MB/s)(196MiB/2001msec); 0 zone resets 00:09:35.987 slat (usec): min=3, max=259, avg= 5.04, stdev= 2.23 00:09:35.987 clat (usec): min=225, max=9457, avg=2541.26, stdev=618.45 00:09:35.987 lat (usec): min=230, max=9470, avg=2546.30, stdev=619.49 00:09:35.987 clat percentiles (usec): 00:09:35.987 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:35.987 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:35.987 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2769], 95.00th=[ 3392], 00:09:35.987 | 99.00th=[ 5669], 99.50th=[ 6128], 99.90th=[ 7308], 99.95th=[ 7635], 00:09:35.987 | 99.99th=[ 9241] 00:09:35.987 bw ( KiB/s): min=96016, max=102698, per=99.55%, avg=99662.00, stdev=3382.51, samples=3 00:09:35.987 iops : min=24004, max=25674, avg=24915.33, stdev=845.40, samples=3 00:09:35.987 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:35.987 lat (msec) : 2=1.14%, 4=94.77%, 10=4.05% 00:09:35.987 cpu : usr=99.00%, sys=0.05%, ctx=30, majf=0, minf=627 
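Note: each nvme_fio pass runs fio with SPDK's external NVMe ioengine rather than against a kernel block device: the build/fio/spdk_nvme plugin is LD_PRELOADed (together with the ASan runtime in this build) and the filename encodes the transport and PCI address. A job file roughly equivalent to what the traced command line amounts to (example_config.fio plus the --filename and --bs overrides) is sketched below; the real example_config.fio in the repo is the authoritative version, and thread=1 is an assumption about how the plugin is normally configured rather than something visible in this log:

    [global]
    ioengine=spdk      ; provided by the preloaded build/fio/spdk_nvme plugin
    thread=1           ; assumption, see note above
    rw=randrw          ; matches the fio banner above
    bs=4096            ; the --bs=4096 override
    iodepth=128        ; matches the fio banner above
    time_based=1
    runtime=2          ; the runs report run=2001msec

    [test]
    filename=trtype=PCIe traddr=0000.00.10.0   ; dots instead of colons, as in the traced --filename

launched as traced:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096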
00:09:35.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:35.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.987 issued rwts: total=50353,50079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.987 00:09:35.987 Run status group 0 (all jobs): 00:09:35.987 READ: bw=98.3MiB/s (103MB/s), 98.3MiB/s-98.3MiB/s (103MB/s-103MB/s), io=197MiB (206MB), run=2001-2001msec 00:09:35.987 WRITE: bw=97.8MiB/s (103MB/s), 97.8MiB/s-97.8MiB/s (103MB/s-103MB/s), io=196MiB (205MB), run=2001-2001msec 00:09:36.245 ----------------------------------------------------- 00:09:36.245 Suppressions used: 00:09:36.245 count bytes template 00:09:36.245 1 32 /usr/src/fio/parse.c 00:09:36.245 1 8 libtcmalloc_minimal.so 00:09:36.245 ----------------------------------------------------- 00:09:36.245 00:09:36.245 13:22:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:36.245 13:22:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:36.245 13:22:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:36.245 13:22:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:36.502 13:22:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:36.502 13:22:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:36.760 13:22:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:36.760 13:22:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1356 
-- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:36.761 13:22:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:36.761 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:36.761 fio-3.35 00:09:36.761 Starting 1 thread 00:09:51.634 00:09:51.634 test: (groupid=0, jobs=1): err= 0: pid=76168: Mon Nov 18 13:22:47 2024 00:09:51.634 read: IOPS=24.3k, BW=95.0MiB/s (99.6MB/s)(190MiB/2001msec) 00:09:51.634 slat (usec): min=3, max=190, avg= 4.95, stdev= 1.92 00:09:51.634 clat (usec): min=204, max=7597, avg=2627.57, stdev=570.84 00:09:51.634 lat (usec): min=209, max=7609, avg=2632.51, stdev=571.85 00:09:51.634 clat percentiles (usec): 00:09:51.634 | 1.00th=[ 1909], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:51.634 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:51.634 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 2966], 95.00th=[ 3621], 00:09:51.634 | 99.00th=[ 5407], 99.50th=[ 5800], 99.90th=[ 7373], 99.95th=[ 7439], 00:09:51.634 | 99.99th=[ 7504] 00:09:51.634 bw ( KiB/s): min=95696, max=101552, per=100.00%, avg=99013.33, stdev=3004.65, samples=3 00:09:51.634 iops : min=23924, max=25388, avg=24753.33, stdev=751.16, samples=3 00:09:51.634 write: IOPS=24.2k, BW=94.4MiB/s (99.0MB/s)(189MiB/2001msec); 0 zone resets 00:09:51.634 slat (nsec): min=3509, max=53931, avg=5188.56, stdev=1760.96 00:09:51.634 clat (usec): min=223, max=7601, avg=2633.82, stdev=584.53 00:09:51.634 lat (usec): min=227, max=7612, avg=2639.01, stdev=585.56 00:09:51.634 clat percentiles (usec): 00:09:51.634 | 1.00th=[ 1909], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:51.634 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:51.634 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 2999], 95.00th=[ 3654], 00:09:51.634 | 99.00th=[ 5407], 99.50th=[ 6128], 99.90th=[ 7373], 99.95th=[ 7439], 00:09:51.634 | 99.99th=[ 7504] 00:09:51.634 bw ( KiB/s): min=96112, max=101520, per=100.00%, avg=99096.00, stdev=2747.15, samples=3 00:09:51.634 iops : min=24028, max=25380, avg=24774.00, stdev=686.79, samples=3 00:09:51.634 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:51.634 lat (msec) : 2=1.34%, 4=95.10%, 10=3.52% 00:09:51.634 cpu : usr=99.30%, sys=0.05%, ctx=7, majf=0, minf=627 00:09:51.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:51.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.634 issued rwts: total=48664,48345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.634 00:09:51.634 Run status group 0 (all jobs): 00:09:51.634 READ: bw=95.0MiB/s (99.6MB/s), 95.0MiB/s-95.0MiB/s (99.6MB/s-99.6MB/s), io=190MiB (199MB), run=2001-2001msec 00:09:51.634 WRITE: bw=94.4MiB/s (99.0MB/s), 94.4MiB/s-94.4MiB/s (99.0MB/s-99.0MB/s), io=189MiB (198MB), run=2001-2001msec 00:09:51.634 ----------------------------------------------------- 00:09:51.634 Suppressions used: 00:09:51.634 count bytes template 00:09:51.634 1 32 /usr/src/fio/parse.c 00:09:51.634 1 8 libtcmalloc_minimal.so 00:09:51.634 ----------------------------------------------------- 00:09:51.634 00:09:51.634 13:22:47 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:09:51.634 13:22:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:51.634 13:22:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:51.634 13:22:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:51.634 13:22:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:51.634 13:22:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:51.634 13:22:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:51.634 13:22:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.634 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:51.893 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:51.893 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:51.893 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:51.893 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:51.893 13:22:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:51.893 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:51.893 fio-3.35 00:09:51.893 Starting 1 thread 00:09:58.522 00:09:58.522 test: (groupid=0, jobs=1): err= 0: pid=76230: Mon Nov 18 13:22:53 2024 00:09:58.522 read: IOPS=18.5k, BW=72.1MiB/s (75.6MB/s)(144MiB/2001msec) 00:09:58.522 slat (usec): min=3, max=101, avg= 5.64, stdev= 3.19 00:09:58.522 clat (usec): min=752, max=12007, avg=3450.03, stdev=1360.59 00:09:58.522 lat (usec): min=756, max=12054, avg=3455.67, stdev=1362.02 00:09:58.522 clat percentiles (usec): 00:09:58.522 | 1.00th=[ 1893], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2409], 00:09:58.522 | 30.00th=[ 2507], 40.00th=[ 2671], 50.00th=[ 2900], 
60.00th=[ 3195], 00:09:58.522 | 70.00th=[ 3851], 80.00th=[ 4686], 90.00th=[ 5538], 95.00th=[ 6259], 00:09:58.522 | 99.00th=[ 7504], 99.50th=[ 7767], 99.90th=[ 9372], 99.95th=[10159], 00:09:58.522 | 99.99th=[11863] 00:09:58.522 bw ( KiB/s): min=65032, max=87032, per=100.00%, avg=75090.67, stdev=11120.18, samples=3 00:09:58.522 iops : min=16258, max=21758, avg=18772.67, stdev=2780.04, samples=3 00:09:58.522 write: IOPS=18.5k, BW=72.1MiB/s (75.6MB/s)(144MiB/2001msec); 0 zone resets 00:09:58.522 slat (nsec): min=3449, max=86497, avg=5769.96, stdev=3239.39 00:09:58.522 clat (usec): min=734, max=11929, avg=3463.19, stdev=1354.49 00:09:58.522 lat (usec): min=739, max=11944, avg=3468.96, stdev=1355.93 00:09:58.522 clat percentiles (usec): 00:09:58.522 | 1.00th=[ 1893], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2409], 00:09:58.522 | 30.00th=[ 2540], 40.00th=[ 2704], 50.00th=[ 2900], 60.00th=[ 3228], 00:09:58.522 | 70.00th=[ 3851], 80.00th=[ 4686], 90.00th=[ 5538], 95.00th=[ 6259], 00:09:58.522 | 99.00th=[ 7504], 99.50th=[ 7832], 99.90th=[ 9372], 99.95th=[ 9765], 00:09:58.522 | 99.99th=[10683] 00:09:58.522 bw ( KiB/s): min=65432, max=86752, per=100.00%, avg=75088.00, stdev=10800.91, samples=3 00:09:58.522 iops : min=16358, max=21688, avg=18772.00, stdev=2700.23, samples=3 00:09:58.522 lat (usec) : 750=0.01%, 1000=0.02% 00:09:58.522 lat (msec) : 2=2.04%, 4=69.93%, 10=27.96%, 20=0.05% 00:09:58.522 cpu : usr=98.75%, sys=0.15%, ctx=5, majf=0, minf=626 00:09:58.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:58.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.522 issued rwts: total=36931,36942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.522 00:09:58.522 Run status group 0 (all jobs): 00:09:58.522 READ: bw=72.1MiB/s (75.6MB/s), 72.1MiB/s-72.1MiB/s (75.6MB/s-75.6MB/s), io=144MiB (151MB), run=2001-2001msec 00:09:58.522 WRITE: bw=72.1MiB/s (75.6MB/s), 72.1MiB/s-72.1MiB/s (75.6MB/s-75.6MB/s), io=144MiB (151MB), run=2001-2001msec 00:09:58.522 ----------------------------------------------------- 00:09:58.522 Suppressions used: 00:09:58.522 count bytes template 00:09:58.522 1 32 /usr/src/fio/parse.c 00:09:58.522 1 8 libtcmalloc_minimal.so 00:09:58.522 ----------------------------------------------------- 00:09:58.522 00:09:58.522 13:22:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:58.522 13:22:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:58.522 13:22:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:58.522 13:22:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:58.522 13:22:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:58.522 13:22:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:58.522 13:22:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:58.522 13:22:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:58.523 13:22:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:58.523 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:58.523 fio-3.35 00:09:58.523 Starting 1 thread 00:10:05.127 00:10:05.127 test: (groupid=0, jobs=1): err= 0: pid=76285: Mon Nov 18 13:23:00 2024 00:10:05.127 read: IOPS=17.7k, BW=69.3MiB/s (72.7MB/s)(139MiB/2001msec) 00:10:05.127 slat (nsec): min=3402, max=78557, avg=5952.55, stdev=3354.04 00:10:05.127 clat (usec): min=297, max=9441, avg=3586.13, stdev=1408.18 00:10:05.127 lat (usec): min=301, max=9459, avg=3592.09, stdev=1409.75 00:10:05.127 clat percentiles (usec): 00:10:05.127 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:10:05.127 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2966], 60.00th=[ 3359], 00:10:05.127 | 70.00th=[ 4080], 80.00th=[ 4883], 90.00th=[ 5800], 95.00th=[ 6456], 00:10:05.127 | 99.00th=[ 7635], 99.50th=[ 8029], 99.90th=[ 8717], 99.95th=[ 8979], 00:10:05.127 | 99.99th=[ 9372] 00:10:05.127 bw ( KiB/s): min=63848, max=71064, per=94.61%, avg=67154.67, stdev=3645.55, samples=3 00:10:05.127 iops : min=15962, max=17766, avg=16788.67, stdev=911.39, samples=3 00:10:05.127 write: IOPS=17.7k, BW=69.3MiB/s (72.7MB/s)(139MiB/2001msec); 0 zone resets 00:10:05.127 slat (nsec): min=3486, max=87043, avg=6072.75, stdev=3424.62 00:10:05.127 clat (usec): min=313, max=9433, avg=3606.51, stdev=1416.37 00:10:05.127 lat (usec): min=318, max=9439, avg=3612.58, stdev=1417.95 00:10:05.127 clat percentiles (usec): 00:10:05.127 | 1.00th=[ 2008], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:10:05.127 | 30.00th=[ 2638], 40.00th=[ 2802], 50.00th=[ 2999], 60.00th=[ 3392], 00:10:05.127 | 70.00th=[ 4080], 80.00th=[ 4883], 90.00th=[ 5800], 95.00th=[ 6521], 
00:10:05.127 | 99.00th=[ 7635], 99.50th=[ 8094], 99.90th=[ 8848], 99.95th=[ 8979], 00:10:05.127 | 99.99th=[ 9372] 00:10:05.127 bw ( KiB/s): min=63688, max=71128, per=94.53%, avg=67074.67, stdev=3764.54, samples=3 00:10:05.127 iops : min=15922, max=17782, avg=16768.67, stdev=941.13, samples=3 00:10:05.127 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.01% 00:10:05.127 lat (msec) : 2=0.97%, 4=68.04%, 10=30.93% 00:10:05.127 cpu : usr=98.85%, sys=0.00%, ctx=4, majf=0, minf=625 00:10:05.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:05.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.127 issued rwts: total=35506,35496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.127 00:10:05.127 Run status group 0 (all jobs): 00:10:05.127 READ: bw=69.3MiB/s (72.7MB/s), 69.3MiB/s-69.3MiB/s (72.7MB/s-72.7MB/s), io=139MiB (145MB), run=2001-2001msec 00:10:05.127 WRITE: bw=69.3MiB/s (72.7MB/s), 69.3MiB/s-69.3MiB/s (72.7MB/s-72.7MB/s), io=139MiB (145MB), run=2001-2001msec 00:10:05.127 ----------------------------------------------------- 00:10:05.127 Suppressions used: 00:10:05.127 count bytes template 00:10:05.127 1 32 /usr/src/fio/parse.c 00:10:05.127 1 8 libtcmalloc_minimal.so 00:10:05.127 ----------------------------------------------------- 00:10:05.127 00:10:05.127 ************************************ 00:10:05.127 END TEST nvme_fio 00:10:05.127 ************************************ 00:10:05.127 13:23:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:05.127 13:23:00 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:05.127 00:10:05.127 real 0m35.814s 00:10:05.127 user 0m16.869s 00:10:05.127 sys 0m36.730s 00:10:05.127 13:23:00 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.127 13:23:00 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:05.127 ************************************ 00:10:05.127 END TEST nvme 00:10:05.127 ************************************ 00:10:05.127 00:10:05.127 real 1m44.982s 00:10:05.127 user 3m34.715s 00:10:05.127 sys 0m47.496s 00:10:05.127 13:23:00 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.127 13:23:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.127 13:23:00 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:05.127 13:23:00 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:05.127 13:23:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.127 13:23:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.127 13:23:00 -- common/autotest_common.sh@10 -- # set +x 00:10:05.127 ************************************ 00:10:05.127 START TEST nvme_scc 00:10:05.127 ************************************ 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:05.127 * Looking for test storage... 
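Aside: stripped of the xtrace noise, each per-controller nvme_fio run that just finished boils down to roughly the command below. This is only a condensed reading of the trace above, not a separate script; the fio binary, plugin path and libasan location are the ones this particular run detected and will differ on other hosts. Note that the PCI address in --filename uses dots instead of colons, because fio treats colons in filenames as separators.

    # what the traced fio_plugin helper effectively ran for the 0000:00:13.0 controller
    asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096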
00:10:05.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.127 13:23:00 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:05.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.127 --rc genhtml_branch_coverage=1 00:10:05.127 --rc genhtml_function_coverage=1 00:10:05.127 --rc genhtml_legend=1 00:10:05.127 --rc geninfo_all_blocks=1 00:10:05.127 --rc geninfo_unexecuted_blocks=1 00:10:05.127 00:10:05.127 ' 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:05.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.127 --rc genhtml_branch_coverage=1 00:10:05.127 --rc genhtml_function_coverage=1 00:10:05.127 --rc genhtml_legend=1 00:10:05.127 --rc geninfo_all_blocks=1 00:10:05.127 --rc geninfo_unexecuted_blocks=1 00:10:05.127 00:10:05.127 ' 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:05.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.127 --rc genhtml_branch_coverage=1 00:10:05.127 --rc genhtml_function_coverage=1 00:10:05.127 --rc genhtml_legend=1 00:10:05.127 --rc geninfo_all_blocks=1 00:10:05.127 --rc geninfo_unexecuted_blocks=1 00:10:05.127 00:10:05.127 ' 00:10:05.127 13:23:00 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:05.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.127 --rc genhtml_branch_coverage=1 00:10:05.127 --rc genhtml_function_coverage=1 00:10:05.127 --rc genhtml_legend=1 00:10:05.128 --rc geninfo_all_blocks=1 00:10:05.128 --rc geninfo_unexecuted_blocks=1 00:10:05.128 00:10:05.128 ' 00:10:05.128 13:23:00 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:05.128 13:23:00 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.128 13:23:00 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.128 13:23:00 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.128 13:23:00 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.128 13:23:00 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.128 13:23:00 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.128 13:23:00 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.128 13:23:00 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:05.128 13:23:00 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
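The "lt 1.15 2" check traced just above (scripts/common.sh cmp_versions) is a field-by-field numeric compare after splitting the version strings on '.', '-' and ':'. Below is a minimal sketch of the '<' case only, under that reading of the trace; the real helper also validates each field with decimal() and dispatches on the operator argument.

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    # lt 1.15 2 succeeds, matching the "return 0" path in the trace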
00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:05.128 13:23:00 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:05.128 13:23:00 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.128 13:23:00 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:05.128 13:23:00 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:05.128 13:23:00 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:05.128 13:23:00 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:05.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.388 Waiting for block devices as requested 00:10:05.388 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.388 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.388 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.648 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:10.961 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:10.961 13:23:06 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:10.961 13:23:06 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:10.961 13:23:06 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:10.961 13:23:06 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.961 13:23:06 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
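The long dump that follows is nvme_get (from test/common/nvme/functions.sh, sourced above) walking the nvme-cli id-ctrl output for nvme0 one line at a time. A simplified sketch of what that loop amounts to, not the helper itself:

    # fold each "register: value" line from nvme-cli into a global associative array
    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue               # header/blank lines carry no value
        reg=${reg//[[:space:]]/}                # vid, ssvid, sn, mn, ...
        eval "nvme0[$reg]=\"${val# }\""         # e.g. nvme0[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

The same pattern repeats further down for each namespace (nvme0n1 via id-ns), which lets the rest of the test read fields such as ${nvme0[oncs]} or ${nvme0n1[flbas]} straight from bash instead of re-invoking nvme-cli.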
00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:10.961 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.962 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:10.963 13:23:06 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:10.963 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:10.964 13:23:06 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.964 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:10.965 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.966 13:23:06 
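The repeated id-ns trace above is nvme/functions.sh walking the "field : value" lines that nvme-cli prints and stashing each pair in a per-namespace associative array (the IFS=: / read -r reg val / eval pattern). A minimal stand-alone sketch of that pattern follows; it is written for illustration only, not copied from functions.sh, and the helper name parse_id_output plus the /dev/nvme0 path are assumptions.

#!/usr/bin/env bash
# Illustrative sketch of the "IFS=: read -r reg val" parsing pattern seen in
# the trace above; not the actual nvme/functions.sh implementation.
declare -A ctrl_info

parse_id_output() {
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # field name, e.g. nsfeat
        val="${val#"${val%%[![:space:]]*}"}"      # trim leading blanks only
        [[ -n $reg && -n $val ]] || continue      # skip headers/blank lines
        ctrl_info[$reg]=$val
    done
}

# Usage (assumes nvme-cli is installed and /dev/nvme0 exists); process
# substitution keeps the array assignments in the current shell:
#   parse_id_output < <(nvme id-ctrl /dev/nvme0)
#   echo "vid=${ctrl_info[vid]} mdts=${ctrl_info[mdts]}"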
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:10.966 13:23:06 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:10.966 13:23:06 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:10.966 13:23:06 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.966 13:23:06 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- 
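At this point the trace has finished nvme0 (registering it in the ctrls/nvmes/bdfs/ordered_ctrls arrays with BDF 0000:00:11.0) and moves on to nvme1 at 0000:00:10.0. A rough sketch of that sysfs enumeration and bookkeeping is below; it reuses the array names from the trace purely as an illustration, and derives the BDF from the device symlink, which holds for PCIe-attached controllers.

#!/usr/bin/env bash
# Illustrative sketch of the controller enumeration/bookkeeping pattern; the
# real script additionally filters PCI addresses and parses every namespace.
declare -A ctrls bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                                # e.g. nvme1
    bdf=$(readlink -f "$ctrl/device"); bdf=${bdf##*/}   # e.g. 0000:00:10.0
    ctrls["$ctrl_dev"]=$ctrl_dev
    bdfs["$ctrl_dev"]=$bdf
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # index by controller number
done

for dev in "${ordered_ctrls[@]}"; do
    echo "$dev @ ${bdfs[$dev]}"
done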
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.966 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:10.967 13:23:06 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:10.967 
13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:10.967 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:10.967 13:23:06 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:10.968 13:23:06 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:10.968 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:10.969 13:23:06 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.969 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:10.970 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.971 
13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:10.971 13:23:06 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:10.971 13:23:06 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:10.971 13:23:06 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.971 13:23:06 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:10.971 13:23:06 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:10.971 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:10.972 13:23:06 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:10.972 13:23:06 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
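For readers skimming this part of the trace: the repeated IFS=:, read -r reg val, [[ -n ... ]] and eval lines all come from the nvme_get helper in nvme/functions.sh, which turns `nvme id-ctrl` / `nvme id-ns` output into a global associative array named after the device. Below is a minimal sketch reconstructed from the trace, not the verbatim script; the exact whitespace trimming of keys and values is an assumption.

nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"

	# e.g. "$@" == id-ctrl /dev/nvme2, as in the trace above
	while IFS=: read -r reg val; do
		[[ -n $val ]] || continue        # skip headers and empty fields
		reg=${reg//[[:space:]]/}         # "lbaf  0" -> "lbaf0" (trimming assumed)
		val=${val# }                     # drop the space after ":" (assumed)
		eval "${ref}[\$reg]=\$val"       # e.g. nvme2[mdts]=7, nvme2[sn]='12342 '
	done < <(/usr/local/src/nvme-cli/nvme "$@")
}

After `nvme_get nvme2 id-ctrl /dev/nvme2`, the fields are read back as ${nvme2[mdts]}, ${nvme2[subnqn]} and so on, which is exactly what the assignment echoes in this trace are building up.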
00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:10.972 13:23:06 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.972 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:10.973 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:10.974 
13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
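The loop driving these nvme_get calls is also visible in the trace (nvme/functions.sh lines 47-63): it walks /sys/class/nvme/nvme*, skips controllers that pci_can_use rejects, captures id-ctrl data for the controller and id-ns data for each namespace, and records everything in the ctrls, nvmes, bdfs and ordered_ctrls arrays. A reconstructed sketch under those observations follows; the BDF lookup via readlink and the pci_can_use stub are assumptions, not the verbatim script.

if ! type pci_can_use &> /dev/null; then
	pci_can_use() { return 0; }          # permissive stub when scripts/common.sh is not sourced
fi

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

scan_nvme_ctrls() {
	local ctrl ctrl_dev ns ns_dev pci

	for ctrl in /sys/class/nvme/nvme*; do
		[[ -e $ctrl ]] || continue
		pci=$(readlink -f "$ctrl/device") && pci=${pci##*/}   # e.g. 0000:00:12.0 (lookup assumed)
		pci_can_use "$pci" || continue

		ctrl_dev=${ctrl##*/}                                  # nvme2
		nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"         # fills nvme2[...], see sketch above

		local -n _ctrl_ns=${ctrl_dev}_ns                      # per-controller namespace map
		for ns in "$ctrl/${ctrl##*/}n"*; do
			[[ -e $ns ]] || continue
			ns_dev=${ns##*/}                                  # nvme2n1, nvme2n2, ...
			nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
			_ctrl_ns[${ns##*n}]=$ns_dev                       # keyed by namespace index
		done

		ctrls["$ctrl_dev"]=$ctrl_dev
		nvmes["$ctrl_dev"]=${ctrl_dev}_ns
		bdfs["$ctrl_dev"]=$pci                                # e.g. bdfs[nvme1]=0000:00:10.0
		ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
	done
}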
00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:10.974 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.975 13:23:06 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:10.975 13:23:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:10.975 13:23:06 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:10.976 13:23:06 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.976 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
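[Editor's note] The xtrace entries above and below (nvme/functions.sh@16-23) all come from the same helper: it runs nvme-cli's id-ns or id-ctrl against a device and turns each "field : value" line of the output into one key of a global bash associative array (nvme2n2[nsze]=0x100000, nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ', and so on). A minimal reconstruction of that loop, assuming only what the trace echoes plus some whitespace trimming the log does not show verbatim:

    # Hedged reconstruction of nvme_get as echoed by the trace; the nvme-cli path
    # and the functions.sh@ line references come from the log, the trimming of the
    # padding around ':' is an assumption.
    nvme_get() {
        local ref=$1 reg val                       # functions.sh@17: target array name, e.g. nvme2n2
        shift                                      # functions.sh@18: rest is the nvme-cli sub-command
        local -gA "$ref=()"                        # functions.sh@20: declare a global associative array

        # functions.sh@16/@21: run e.g. "nvme id-ns /dev/nvme2n2" and split each
        # output line on the first colon into register name and value.
        while IFS=: read -r reg val; do
            reg=${reg// /} val=${val# }            # assumed cleanup of padding around ':'
            [[ -n $val ]] || continue              # functions.sh@22: skip lines with no value
            eval "${ref}[$reg]=\"$val\""           # functions.sh@23: e.g. nvme2n2[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # Usage mirroring the trace (functions.sh@57): populate nvme2n2 from id-ns, then read a field.
    #   nvme_get nvme2n2 id-ns /dev/nvme2n2
    #   echo "${nvme2n2[nsze]}"    # -> 0x100000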
00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 
13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 
13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:10.977 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:10.978 13:23:06 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.978 
13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.978 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:10.979 13:23:06 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:10.979 13:23:06 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:10.979 13:23:06 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.979 13:23:06 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:10.979 13:23:06 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
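[Editor's note] The surrounding loop (nvme/functions.sh@47-63, with the PCI allow-list check in scripts/common.sh@18-27) is what walks /sys/class/nvme, skips controllers pci_can_use rejects, and calls nvme_get once per controller and once per namespace, as seen above for nvme2n2/nvme2n3 and now for nvme3 at 0000:00:13.0. A hedged sketch of that loop, assuming the BDF is derived from the controller's device symlink (the log only shows the resulting 0000:00:13.0) and omitting functions.sh@61, whose nvmes["$ctrl_dev"] bookkeeping the trace does not fully reveal:

    # Sketch of the enumeration loop echoed above; relies on nvme_get (sketched
    # earlier) and on pci_can_use from scripts/common.sh.
    declare -A ctrls=() bdfs=()
    declare -a ordered_ctrls=()

    for ctrl in /sys/class/nvme/nvme*; do                      # functions.sh@47
        [[ -e $ctrl ]] || continue                             # functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")")        # assumed; log shows only the BDF, e.g. 0000:00:13.0
        pci_can_use "$pci" || continue                         # functions.sh@50 + scripts/common.sh@18-27

        ctrl_dev=${ctrl##*/}                                   # functions.sh@51, e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"          # functions.sh@52

        declare -A _ctrl_ns=()
        for ns in "$ctrl/${ctrl##*/}n"*; do                    # functions.sh@54
            [[ -e $ns ]] || continue                           # functions.sh@55
            ns_dev=${ns##*/}                                   # functions.sh@56, e.g. nvme2n3
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"            # functions.sh@57
            _ctrl_ns[${ns##*n}]=$ns_dev                        # functions.sh@58, keyed by namespace index
        done

        ctrls["$ctrl_dev"]=$ctrl_dev                           # functions.sh@60
        bdfs["$ctrl_dev"]=$pci                                 # functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev             # functions.sh@63
    done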
00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.979 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 
13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:10.980 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.981 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
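The run above repeats one capture pattern for every id-ctrl field of nvme3: functions.sh splits each "reg : val" line on the colon and stores the value in a per-controller associative array through eval (nvme3[sqes]=0x66, nvme3[oncs]=0x15d, and so on). A minimal, simplified sketch of that pattern follows; it is not the actual functions.sh code, and the device path is illustrative.

#!/usr/bin/env bash
# Simplified illustration of the capture loop traced above.
# Each id-ctrl line looks like "sqes : 0x66"; read puts the remainder of the
# line in val, so values with embedded colons (e.g. subnqn) stay intact.
declare -A nvme3    # per-controller register map, as in functions.sh

while IFS=: read -r reg val; do
    [[ -n $val ]] || continue          # skip lines with no value
    reg=${reg//[[:space:]]/}           # strip the padding around the key
    val=${val# }                       # drop the single space after the colon
    eval "nvme3[${reg}]=\"${val}\""    # e.g. nvme3[oncs]=0x15d
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)

echo "captured ${#nvme3[@]} fields, oncs=${nvme3[oncs]:-unset}"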
00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:10.982 13:23:07 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:10.982 13:23:07 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:10.982 
13:23:07 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:10.982 13:23:07 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:10.982 13:23:07 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:10.982 13:23:07 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:10.982 13:23:07 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:11.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:12.124 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.124 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.124 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.124 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:12.124 13:23:08 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:12.124 13:23:08 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:12.124 13:23:08 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.124 13:23:08 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:12.124 ************************************ 00:10:12.124 START TEST nvme_simple_copy 00:10:12.124 ************************************ 00:10:12.124 13:23:08 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:12.386 Initializing NVMe Controllers 00:10:12.386 Attaching to 0000:00:10.0 00:10:12.386 Controller supports SCC. Attached to 0000:00:10.0 00:10:12.386 Namespace ID: 1 size: 6GB 00:10:12.386 Initialization complete. 00:10:12.386 00:10:12.386 Controller QEMU NVMe Ctrl (12340 ) 00:10:12.386 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:12.386 Namespace Block Size:4096 00:10:12.386 Writing LBAs 0 to 63 with Random Data 00:10:12.386 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:12.386 LBAs matching Written Data: 64 00:10:12.386 00:10:12.386 real 0m0.273s 00:10:12.386 user 0m0.106s 00:10:12.386 sys 0m0.064s 00:10:12.386 13:23:08 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.386 13:23:08 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:12.386 ************************************ 00:10:12.386 END TEST nvme_simple_copy 00:10:12.386 ************************************ 00:10:12.648 ************************************ 00:10:12.648 END TEST nvme_scc 00:10:12.648 ************************************ 00:10:12.648 00:10:12.648 real 0m7.840s 00:10:12.648 user 0m1.101s 00:10:12.648 sys 0m1.393s 00:10:12.648 13:23:08 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.648 13:23:08 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:12.648 13:23:08 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:12.648 13:23:08 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:12.648 13:23:08 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:12.648 13:23:08 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:12.648 13:23:08 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:12.648 13:23:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.648 13:23:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.648 13:23:08 -- common/autotest_common.sh@10 -- # set +x 00:10:12.648 ************************************ 00:10:12.648 START TEST nvme_fdp 00:10:12.648 ************************************ 00:10:12.648 13:23:08 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:12.648 * Looking for test storage... 
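The nvme_scc suite above chose its target controller by reading each controller's ONCS value (0x15d on every controller here) and testing bit 8, the Simple Copy capability bit, which is exactly the (( oncs & 1 << 8 )) check visible in the ctrl_has_scc trace. A small standalone sketch of the same check; the helper name and device path are illustrative.

#!/usr/bin/env bash
# Does a controller advertise Simple Copy support (ONCS bit 8)?
has_scc() {
    local dev=$1 oncs
    # Pull the oncs field out of id-ctrl output, e.g. "oncs : 0x15d".
    oncs=$(/usr/local/src/nvme-cli/nvme id-ctrl "$dev" |
           awk -F: '$1 ~ /^oncs/ {gsub(/ /, "", $2); print $2}')
    (( oncs & 1 << 8 ))     # same bit test as nvme/functions.sh@188
}

has_scc /dev/nvme1 && echo "/dev/nvme1 supports Simple Copy"

Since the check selected nvme1 at 0000:00:10.0, simple_copy then wrote random data to LBAs 0-63 on that controller, issued a copy to destination LBA 256, and verified that all 64 LBAs match, as reported in the test output above.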
00:10:12.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:12.648 13:23:08 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.648 13:23:08 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.648 13:23:08 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.648 13:23:08 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.648 13:23:08 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:12.648 13:23:08 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.648 13:23:08 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.648 --rc genhtml_branch_coverage=1 00:10:12.648 --rc genhtml_function_coverage=1 00:10:12.648 --rc genhtml_legend=1 00:10:12.648 --rc geninfo_all_blocks=1 00:10:12.648 --rc geninfo_unexecuted_blocks=1 00:10:12.649 00:10:12.649 ' 00:10:12.649 13:23:08 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.649 --rc genhtml_branch_coverage=1 00:10:12.649 --rc genhtml_function_coverage=1 00:10:12.649 --rc genhtml_legend=1 00:10:12.649 --rc geninfo_all_blocks=1 00:10:12.649 --rc geninfo_unexecuted_blocks=1 00:10:12.649 00:10:12.649 ' 00:10:12.649 13:23:08 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:12.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.649 --rc genhtml_branch_coverage=1 00:10:12.649 --rc genhtml_function_coverage=1 00:10:12.649 --rc genhtml_legend=1 00:10:12.649 --rc geninfo_all_blocks=1 00:10:12.649 --rc geninfo_unexecuted_blocks=1 00:10:12.649 00:10:12.649 ' 00:10:12.649 13:23:08 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.649 --rc genhtml_branch_coverage=1 00:10:12.649 --rc genhtml_function_coverage=1 00:10:12.649 --rc genhtml_legend=1 00:10:12.649 --rc geninfo_all_blocks=1 00:10:12.649 --rc geninfo_unexecuted_blocks=1 00:10:12.649 00:10:12.649 ' 00:10:12.649 13:23:08 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.649 13:23:08 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.649 13:23:08 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.649 13:23:08 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.649 13:23:08 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.649 13:23:08 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.649 13:23:08 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.649 13:23:08 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.649 13:23:08 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:12.649 13:23:08 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
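Before scanning controllers, nvme_fdp.sh sources the common helpers: the lcov probe above runs "lt 1.15 2", which cmp_versions answers by splitting both version strings on ".", "-" and ":" and comparing them component by component, then picks the 1.x-style coverage flags. A simplified sketch of that comparison, not the exact scripts/common.sh code:

#!/usr/bin/env bash
# Simplified component-wise version comparison in the spirit of cmp_versions.
# Returns success when $1 is strictly lower than $2 (numeric components only).
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"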
00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:12.649 13:23:08 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:12.649 13:23:08 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:12.649 13:23:08 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:13.220 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:13.220 Waiting for block devices as requested 00:10:13.220 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.220 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.480 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.480 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.782 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:18.782 13:23:14 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:18.782 13:23:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:18.782 13:23:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:18.782 13:23:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:18.782 13:23:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
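With the helpers loaded, scan_nvme_ctrls walks /sys/class/nvme/nvme*, captures id-ctrl for each controller (the nvme0 capture beginning above), and records each one in the ctrls, nvmes and bdfs associative arrays declared earlier, plus ordered_ctrls indexed by controller number. A rough sketch of that bookkeeping; it omits the pci_can_use filtering and the full register capture that the real functions.sh performs, and the PCI-address lookup shown here is illustrative.

#!/usr/bin/env bash
# Rough sketch of the per-controller bookkeeping done by scan_nvme_ctrls.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme0
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
    ctrls["$ctrl_dev"]=$ctrl_dev                      # as in functions.sh@60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of the namespace map, @61
    bdfs["$ctrl_dev"]=$pci                            # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63, indexed by controller number
done

for dev in "${!bdfs[@]}"; do
    echo "$dev -> ${bdfs[$dev]}"
done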
00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:18.782 13:23:14 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.782 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:18.783 13:23:14 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:18.783 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.783 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:18.784 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:18.784 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:18.784 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:18.785 
13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:18.785 13:23:14 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.785 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:18.786 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:18.786 13:23:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:18.786 13:23:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:18.786 13:23:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:18.787 13:23:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:18.787 13:23:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 
13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:18.787 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 
13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.788 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:18.789 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.789 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:18.790 13:23:14 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:18.790 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:18.791 13:23:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:18.791 13:23:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:18.791 13:23:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:18.791 13:23:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:18.791 
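At this point in the trace nvme/functions.sh has finished scanning nvme1: the namespace map nvme1_ns is filled and the controller is registered in the global ctrls, nvmes, bdfs and ordered_ctrls arrays (bdf 0000:00:10.0) before the loop moves on to the controller at 0000:00:12.0. The pattern being exercised is nvme-cli "field : value" output split on ':' into a bash associative array. A minimal standalone sketch of that pattern follows; it assumes nvme-cli is on PATH, and get_nvme_fields and the fields array are illustrative names, not the actual SPDK helpers:

    # Sketch only: capture "field : value" lines from nvme-cli into an associative array.
    get_nvme_fields() {
        local dev=$1 cmd=$2 reg val
        declare -gA fields=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # e.g. "nsze    " -> "nsze"
            [[ -n $reg && -n $val ]] || continue
            fields[$reg]=${val# }             # e.g. fields[nsze]=0x17a17a
        done < <(nvme "$cmd" "$dev")
    }
    # usage: get_nvme_fields /dev/nvme1n1 id-ns; echo "${fields[flbas]}"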
13:23:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:18.791 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:18.792 13:23:14 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.792 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:18.793 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:18.794 13:23:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:18.794 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:18.794 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:18.795 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.796 13:23:14 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:18.796 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:18.797 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.797 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
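nvme2n3 reports the same geometry as its siblings: nsze=0x100000 logical blocks with flbas=0x4, and LBA format 4 is described (for nvme2n1/nvme2n2 above, and again for this namespace further down) as ms:0 lbads:12, i.e. 4096-byte data blocks with no metadata. Since nvmcap comes back as 0 from this QEMU controller (as seen for the other namespaces above), the usable size has to come from that pair of fields; a quick check of the arithmetic:

    # 0x100000 blocks x 2^12 bytes per block = 4 GiB per namespace (values from this run)
    blocks=$((0x100000)); lbads=12
    printf '%d bytes (%d GiB)\n' $((blocks << lbads)) $(( (blocks << lbads) >> 30 ))
    # -> 4294967296 bytes (4 GiB)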
00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:18.798 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:18.799 
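Each of these per-namespace dumps is driven by the for ns in "$ctrl/${ctrl##*/}n"* loop visible in the trace (functions.sh@54-58): every /sys/class/nvme/nvme2/nvme2n<N> entry is handed to nvme_get and then recorded in _ctrl_ns under its namespace index. A stripped-down sketch of that walk, using the controller path from this run:

    # Sketch of the namespace walk traced around nvme/functions.sh@54-58
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    for ns in "$ctrl/${ctrl##*/}n"*; do         # expands to nvme2n1 nvme2n2 nvme2n3 here
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                        # e.g. nvme2n3
        # nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   (the id-ns parse traced at @57)
        _ctrl_ns[${ns##*n}]=$ns_dev             # key 1/2/3 -> nvme2n1/nvme2n2/nvme2n3
    done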
13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:18.799 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:18.799 13:23:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:18.799 13:23:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:18.799 13:23:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:18.799 13:23:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.799 13:23:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:19.063 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:19.063 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 
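The attribute word captured just above, nvme3[ctratt]=0x88010, is the field an FDP-focused run like this one would inspect. Assuming the FDP-support flag sits at bit 19 of CTRATT (an assumption about the spec revision, not something this log states), the check reduces to a one-liner:

    # Hypothetical check; bit 19 as the CTRATT FDP-support position is an assumption.
    ctratt=$((0x88010))
    (( (ctratt >> 19) & 1 )) && echo "controller advertises FDP" || echo "no FDP"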
13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.064 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 
13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:19.065 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:19.066 13:23:14 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:19.066 13:23:14 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:19.067 13:23:14 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:19.067 13:23:14 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:19.067 13:23:14 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:19.067 13:23:14 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:19.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:20.273 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:20.273 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:20.273 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:20.273 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:20.273 13:23:16 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:20.273 13:23:16 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:20.273 13:23:16 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.273 13:23:16 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:20.273 ************************************ 00:10:20.273 START TEST nvme_flexible_data_placement 00:10:20.273 ************************************ 00:10:20.273 13:23:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:20.534 Initializing NVMe Controllers 00:10:20.535 Attaching to 0000:00:13.0 00:10:20.535 Controller supports FDP Attached to 0000:00:13.0 00:10:20.535 Namespace ID: 1 Endurance Group ID: 1 00:10:20.535 Initialization complete. 00:10:20.535 00:10:20.535 ================================== 00:10:20.535 == FDP tests for Namespace: #01 == 00:10:20.535 ================================== 00:10:20.535 00:10:20.535 Get Feature: FDP: 00:10:20.535 ================= 00:10:20.535 Enabled: Yes 00:10:20.535 FDP configuration Index: 0 00:10:20.535 00:10:20.535 FDP configurations log page 00:10:20.535 =========================== 00:10:20.535 Number of FDP configurations: 1 00:10:20.535 Version: 0 00:10:20.535 Size: 112 00:10:20.535 FDP Configuration Descriptor: 0 00:10:20.535 Descriptor Size: 96 00:10:20.535 Reclaim Group Identifier format: 2 00:10:20.535 FDP Volatile Write Cache: Not Present 00:10:20.535 FDP Configuration: Valid 00:10:20.535 Vendor Specific Size: 0 00:10:20.535 Number of Reclaim Groups: 2 00:10:20.535 Number of Reclaim Unit Handles: 8 00:10:20.535 Max Placement Identifiers: 128 00:10:20.535 Number of Namespaces Supported: 256 00:10:20.535 Reclaim unit Nominal Size: 6000000 bytes 00:10:20.535 Estimated Reclaim Unit Time Limit: Not Reported 00:10:20.535 RUH Desc #000: RUH Type: Initially Isolated 00:10:20.535 RUH Desc #001: RUH Type: Initially Isolated 00:10:20.535 RUH Desc #002: RUH Type: Initially Isolated 00:10:20.535 RUH Desc #003: RUH Type: Initially Isolated 00:10:20.535 RUH Desc #004: RUH Type: Initially Isolated 00:10:20.535 RUH Desc #005: RUH Type: Initially Isolated 00:10:20.535 RUH Desc #006: RUH Type: Initially Isolated 00:10:20.535 RUH Desc #007: RUH Type: Initially Isolated 00:10:20.535 00:10:20.535 FDP reclaim unit handle usage log page 00:10:20.535 ====================================== 00:10:20.535 Number of Reclaim Unit Handles: 8 00:10:20.535 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:20.535 RUH Usage Desc #001: RUH Attributes: Unused 00:10:20.535 RUH Usage Desc #002: RUH Attributes: Unused 00:10:20.535 RUH Usage Desc #003: RUH Attributes: Unused 00:10:20.535 RUH Usage Desc #004: RUH Attributes: Unused 00:10:20.535 RUH Usage Desc #005: RUH Attributes: Unused 00:10:20.535 RUH Usage Desc #006: RUH Attributes: Unused 00:10:20.535 RUH Usage Desc #007: RUH Attributes: Unused 00:10:20.535 00:10:20.535 FDP statistics log page 00:10:20.535 ======================= 00:10:20.535 Host bytes with metadata written: 1351888896 00:10:20.535 Media bytes with metadata written: 1352073216 00:10:20.535 Media bytes erased: 0 00:10:20.535 00:10:20.535 FDP Reclaim unit handle status 00:10:20.535 ============================== 00:10:20.535 Number of RUHS descriptors: 2 00:10:20.535 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000036bd 00:10:20.535 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:20.535 00:10:20.535 FDP write on placement id: 0 success 00:10:20.535 00:10:20.535 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:10:20.535 00:10:20.535 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:20.535 00:10:20.535 Get Feature: FDP Events for Placement handle: #0 00:10:20.535 ======================== 00:10:20.535 Number of FDP Events: 6 00:10:20.535 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:20.535 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:20.535 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:20.535 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:20.535 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:20.535 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:20.535 00:10:20.535 FDP events log page 00:10:20.535 =================== 00:10:20.535 Number of FDP events: 1 00:10:20.535 FDP Event #0: 00:10:20.535 Event Type: RU Not Written to Capacity 00:10:20.535 Placement Identifier: Valid 00:10:20.535 NSID: Valid 00:10:20.535 Location: Valid 00:10:20.535 Placement Identifier: 0 00:10:20.535 Event Timestamp: 5 00:10:20.535 Namespace Identifier: 1 00:10:20.535 Reclaim Group Identifier: 0 00:10:20.535 Reclaim Unit Handle Identifier: 0 00:10:20.535 00:10:20.535 FDP test passed 00:10:20.535 ************************************ 00:10:20.535 END TEST nvme_flexible_data_placement 00:10:20.535 00:10:20.535 real 0m0.251s 00:10:20.535 user 0m0.076s 00:10:20.535 sys 0m0.072s 00:10:20.535 13:23:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.535 13:23:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:20.535 ************************************ 00:10:20.535 00:10:20.535 real 0m7.923s 00:10:20.535 user 0m1.119s 00:10:20.535 sys 0m1.477s 00:10:20.535 13:23:16 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.535 13:23:16 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:20.535 ************************************ 00:10:20.535 END TEST nvme_fdp 00:10:20.535 ************************************ 00:10:20.535 13:23:16 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:20.535 13:23:16 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:20.535 13:23:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.535 13:23:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.535 13:23:16 -- common/autotest_common.sh@10 -- # set +x 00:10:20.535 ************************************ 00:10:20.535 START TEST nvme_rpc 00:10:20.535 ************************************ 00:10:20.535 13:23:16 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:20.796 * Looking for test storage... 
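The controller selection at the top of this nvme_fdp run comes down to a single bit test: a controller is treated as FDP-capable when bit 19 of its CTRATT value is set, which is why nvme3 (ctratt 0x88010) is picked while the other controllers (ctratt 0x8000) are skipped. A minimal standalone sketch of the same check, assuming nvme-cli is installed and using a hypothetical helper name (the harness itself reads the value from its pre-parsed register cache instead of querying the device):

# Return success if the controller at $1 (e.g. /dev/nvme3) advertises Flexible Data Placement.
ctrl_supports_fdp() {
    local dev=$1 ctratt
    # Pull CTRATT out of 'nvme id-ctrl' output and drop the label and whitespace.
    ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
    # Bit 19 of CTRATT is the FDP support flag: 0x88010 has it set, 0x8000 does not.
    (( ctratt & 1 << 19 ))
}
ctrl_supports_fdp /dev/nvme3 && echo "FDP supported"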
00:10:20.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.796 13:23:16 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.796 --rc genhtml_branch_coverage=1 00:10:20.796 --rc genhtml_function_coverage=1 00:10:20.796 --rc genhtml_legend=1 00:10:20.796 --rc geninfo_all_blocks=1 00:10:20.796 --rc geninfo_unexecuted_blocks=1 00:10:20.796 00:10:20.796 ' 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.796 --rc genhtml_branch_coverage=1 00:10:20.796 --rc genhtml_function_coverage=1 00:10:20.796 --rc genhtml_legend=1 00:10:20.796 --rc geninfo_all_blocks=1 00:10:20.796 --rc geninfo_unexecuted_blocks=1 00:10:20.796 00:10:20.796 ' 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:20.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.796 --rc genhtml_branch_coverage=1 00:10:20.796 --rc genhtml_function_coverage=1 00:10:20.796 --rc genhtml_legend=1 00:10:20.796 --rc geninfo_all_blocks=1 00:10:20.796 --rc geninfo_unexecuted_blocks=1 00:10:20.796 00:10:20.796 ' 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.796 --rc genhtml_branch_coverage=1 00:10:20.796 --rc genhtml_function_coverage=1 00:10:20.796 --rc genhtml_legend=1 00:10:20.796 --rc geninfo_all_blocks=1 00:10:20.796 --rc geninfo_unexecuted_blocks=1 00:10:20.796 00:10:20.796 ' 00:10:20.796 13:23:16 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.796 13:23:16 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:20.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.796 13:23:16 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:20.796 13:23:16 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77654 00:10:20.796 13:23:16 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:20.796 13:23:16 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:20.796 13:23:16 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77654 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77654 ']' 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.796 13:23:16 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.796 [2024-11-18 13:23:16.919927] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:10:20.796 [2024-11-18 13:23:16.920084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77654 ] 00:10:21.056 [2024-11-18 13:23:17.084336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:21.056 [2024-11-18 13:23:17.120493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.056 [2024-11-18 13:23:17.120595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.999 13:23:17 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.999 13:23:17 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:21.999 13:23:17 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:21.999 Nvme0n1 00:10:21.999 13:23:18 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:21.999 13:23:18 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:22.260 request: 00:10:22.260 { 00:10:22.260 "bdev_name": "Nvme0n1", 00:10:22.260 "filename": "non_existing_file", 00:10:22.260 "method": "bdev_nvme_apply_firmware", 00:10:22.260 "req_id": 1 00:10:22.260 } 00:10:22.260 Got JSON-RPC error response 00:10:22.260 response: 00:10:22.260 { 00:10:22.260 "code": -32603, 00:10:22.260 "message": "open file failed." 00:10:22.260 } 00:10:22.260 13:23:18 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:22.260 13:23:18 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:22.260 13:23:18 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:22.534 13:23:18 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:22.534 13:23:18 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77654 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77654 ']' 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77654 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77654 00:10:22.534 killing process with pid 77654 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77654' 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77654 00:10:22.534 13:23:18 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77654 00:10:22.795 ************************************ 00:10:22.795 END TEST nvme_rpc 00:10:22.795 ************************************ 00:10:22.795 00:10:22.795 real 0m2.289s 00:10:22.795 user 0m4.336s 00:10:22.795 sys 0m0.618s 00:10:22.795 13:23:18 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.795 13:23:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.057 13:23:18 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:23.057 13:23:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:23.057 13:23:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.057 13:23:18 -- common/autotest_common.sh@10 -- # set +x 00:10:23.057 ************************************ 00:10:23.057 START TEST nvme_rpc_timeouts 00:10:23.057 ************************************ 00:10:23.057 13:23:18 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:23.057 * Looking for test storage... 00:10:23.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:23.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.057 13:23:19 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.057 --rc genhtml_branch_coverage=1 00:10:23.057 --rc genhtml_function_coverage=1 00:10:23.057 --rc genhtml_legend=1 00:10:23.057 --rc geninfo_all_blocks=1 00:10:23.057 --rc geninfo_unexecuted_blocks=1 00:10:23.057 00:10:23.057 ' 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.057 --rc genhtml_branch_coverage=1 00:10:23.057 --rc genhtml_function_coverage=1 00:10:23.057 --rc genhtml_legend=1 00:10:23.057 --rc geninfo_all_blocks=1 00:10:23.057 --rc geninfo_unexecuted_blocks=1 00:10:23.057 00:10:23.057 ' 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.057 --rc genhtml_branch_coverage=1 00:10:23.057 --rc genhtml_function_coverage=1 00:10:23.057 --rc genhtml_legend=1 00:10:23.057 --rc geninfo_all_blocks=1 00:10:23.057 --rc geninfo_unexecuted_blocks=1 00:10:23.057 00:10:23.057 ' 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.057 --rc genhtml_branch_coverage=1 00:10:23.057 --rc genhtml_function_coverage=1 00:10:23.057 --rc genhtml_legend=1 00:10:23.057 --rc geninfo_all_blocks=1 00:10:23.057 --rc geninfo_unexecuted_blocks=1 00:10:23.057 00:10:23.057 ' 00:10:23.057 13:23:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:23.057 13:23:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77708 00:10:23.057 13:23:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77708 00:10:23.057 13:23:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77740 00:10:23.057 13:23:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:23.057 13:23:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77740 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77740 ']' 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.057 13:23:19 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:23.057 13:23:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:23.319 [2024-11-18 13:23:19.202226] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:10:23.319 [2024-11-18 13:23:19.202946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77740 ] 00:10:23.319 [2024-11-18 13:23:19.366881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:23.319 [2024-11-18 13:23:19.398793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.319 [2024-11-18 13:23:19.398916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.263 Checking default timeout settings: 00:10:24.263 13:23:20 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.263 13:23:20 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:24.263 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:24.263 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:24.525 Making settings changes with rpc: 00:10:24.525 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:24.525 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:24.525 Check default vs. modified settings: 00:10:24.525 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:24.525 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77708 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77708 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:25.098 Setting action_on_timeout is changed as expected. 
00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77708 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:25.098 13:23:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77708 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:25.098 Setting timeout_us is changed as expected. 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77708 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77708 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:25.098 Setting timeout_admin_us is changed as expected. 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
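Each of the three checks above follows the same pattern: pull one field out of the saved default configuration and out of the saved modified configuration, strip the surrounding punctuation, and confirm the value moved off its default. A condensed sketch of that comparison (the /tmp paths are the ones from this run; the helper name is hypothetical):

# Verify that one bdev_nvme option differs between the two save_config snapshots.
setting_changed_as_expected() {
    local setting=$1 before after
    before=$(grep "$setting" /tmp/settings_default_77708 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_77708 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    # Fail if the modified snapshot still carries the default value.
    [[ "$before" == "$after" ]] && return 1
    echo "Setting $setting is changed as expected."
}
setting_changed_as_expected timeout_us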
00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77708 /tmp/settings_modified_77708 00:10:25.098 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77740 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77740 ']' 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77740 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77740 00:10:25.099 killing process with pid 77740 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77740' 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77740 00:10:25.099 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77740 00:10:25.360 RPC TIMEOUT SETTING TEST PASSED. 00:10:25.360 13:23:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:25.360 00:10:25.360 real 0m2.410s 00:10:25.360 user 0m4.788s 00:10:25.360 sys 0m0.572s 00:10:25.360 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.360 13:23:21 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:25.360 ************************************ 00:10:25.360 END TEST nvme_rpc_timeouts 00:10:25.360 ************************************ 00:10:25.360 13:23:21 -- spdk/autotest.sh@239 -- # uname -s 00:10:25.360 13:23:21 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:25.360 13:23:21 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:25.360 13:23:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.360 13:23:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.360 13:23:21 -- common/autotest_common.sh@10 -- # set +x 00:10:25.360 ************************************ 00:10:25.360 START TEST sw_hotplug 00:10:25.360 ************************************ 00:10:25.360 13:23:21 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:25.621 * Looking for test storage... 
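For reference, the timeout exercise that just passed reduces to three rpc.py calls against the running spdk_tgt. A sketch of the sequence driven by nvme_rpc_timeouts.sh (redirecting save_config into the /tmp snapshot files is an assumption about how those files are produced; the option values are the ones from this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_77708     # snapshot the default bdev_nvme options
$rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_77708    # snapshot again after the change
# The two snapshots are then compared field by field, as in the checks logged above.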
00:10:25.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:25.621 13:23:21 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:25.621 13:23:21 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:25.621 13:23:21 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:25.621 13:23:21 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.621 13:23:21 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:25.621 13:23:21 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.621 13:23:21 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:25.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.621 --rc genhtml_branch_coverage=1 00:10:25.621 --rc genhtml_function_coverage=1 00:10:25.621 --rc genhtml_legend=1 00:10:25.621 --rc geninfo_all_blocks=1 00:10:25.621 --rc geninfo_unexecuted_blocks=1 00:10:25.621 00:10:25.621 ' 00:10:25.621 13:23:21 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:25.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.621 --rc genhtml_branch_coverage=1 00:10:25.621 --rc genhtml_function_coverage=1 00:10:25.621 --rc genhtml_legend=1 00:10:25.621 --rc geninfo_all_blocks=1 00:10:25.621 --rc geninfo_unexecuted_blocks=1 00:10:25.621 00:10:25.621 ' 00:10:25.621 13:23:21 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:25.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.621 --rc genhtml_branch_coverage=1 00:10:25.621 --rc genhtml_function_coverage=1 00:10:25.621 --rc genhtml_legend=1 00:10:25.621 --rc geninfo_all_blocks=1 00:10:25.621 --rc geninfo_unexecuted_blocks=1 00:10:25.621 00:10:25.621 ' 00:10:25.621 13:23:21 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:25.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.621 --rc genhtml_branch_coverage=1 00:10:25.621 --rc genhtml_function_coverage=1 00:10:25.621 --rc genhtml_legend=1 00:10:25.621 --rc geninfo_all_blocks=1 00:10:25.621 --rc geninfo_unexecuted_blocks=1 00:10:25.621 00:10:25.621 ' 00:10:25.621 13:23:21 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:25.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.144 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.144 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.144 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.144 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.144 13:23:22 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:26.144 13:23:22 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:26.144 13:23:22 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:10:26.144 13:23:22 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.144 
13:23:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:26.144 13:23:22 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:26.144 13:23:22 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:26.144 13:23:22 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:26.144 13:23:22 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:26.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.663 Waiting for block devices as requested 00:10:26.663 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.663 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.924 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.924 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:32.210 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:32.210 13:23:28 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:32.210 13:23:28 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:32.471 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:32.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:32.471 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:32.767 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:33.026 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.026 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:33.026 13:23:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78591 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:33.026 13:23:29 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:33.026 13:23:29 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:33.026 13:23:29 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:33.026 13:23:29 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:33.026 13:23:29 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:33.026 13:23:29 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:33.286 Initializing NVMe Controllers 00:10:33.286 Attaching to 0000:00:10.0 00:10:33.286 Attaching to 0000:00:11.0 00:10:33.286 Attached to 0000:00:10.0 00:10:33.286 Attached to 0000:00:11.0 00:10:33.286 Initialization complete. Starting I/O... 00:10:33.286 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:33.286 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:33.286 00:10:34.229 QEMU NVMe Ctrl (12340 ): 2774 I/Os completed (+2774) 00:10:34.229 QEMU NVMe Ctrl (12341 ): 2811 I/Os completed (+2811) 00:10:34.229 00:10:35.601 QEMU NVMe Ctrl (12340 ): 6632 I/Os completed (+3858) 00:10:35.601 QEMU NVMe Ctrl (12341 ): 6361 I/Os completed (+3550) 00:10:35.601 00:10:36.541 QEMU NVMe Ctrl (12340 ): 10410 I/Os completed (+3778) 00:10:36.541 QEMU NVMe Ctrl (12341 ): 10052 I/Os completed (+3691) 00:10:36.541 00:10:37.503 QEMU NVMe Ctrl (12340 ): 13729 I/Os completed (+3319) 00:10:37.503 QEMU NVMe Ctrl (12341 ): 13420 I/Os completed (+3368) 00:10:37.503 00:10:38.458 QEMU NVMe Ctrl (12340 ): 16886 I/Os completed (+3157) 00:10:38.458 QEMU NVMe Ctrl (12341 ): 16687 I/Os completed (+3267) 00:10:38.458 00:10:39.033 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:39.033 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.033 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.033 [2024-11-18 13:23:35.132720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:39.033 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:39.033 [2024-11-18 13:23:35.133930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 [2024-11-18 13:23:35.133985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 [2024-11-18 13:23:35.134002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 [2024-11-18 13:23:35.134018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:39.033 [2024-11-18 13:23:35.135476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 [2024-11-18 13:23:35.135529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 [2024-11-18 13:23:35.135544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 [2024-11-18 13:23:35.135560] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.033 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.033 [2024-11-18 13:23:35.157185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:39.033 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:39.033 [2024-11-18 13:23:35.158225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 [2024-11-18 13:23:35.158271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 [2024-11-18 13:23:35.158288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.033 [2024-11-18 13:23:35.158301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.294 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:39.294 [2024-11-18 13:23:35.159559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.294 [2024-11-18 13:23:35.159595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.294 [2024-11-18 13:23:35.159614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.294 [2024-11-18 13:23:35.159629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.294 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:11.0/device 00:10:39.294 EAL: Scan for (pci) bus failed. 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:39.294 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.294 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:39.294 Attaching to 0000:00:10.0 00:10:39.294 Attached to 0000:00:10.0 00:10:39.556 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:39.556 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.556 13:23:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:39.556 Attaching to 0000:00:11.0 00:10:39.556 Attached to 0000:00:11.0 00:10:40.494 QEMU NVMe Ctrl (12340 ): 2842 I/Os completed (+2842) 00:10:40.494 QEMU NVMe Ctrl (12341 ): 2586 I/Os completed (+2586) 00:10:40.494 00:10:41.433 QEMU NVMe Ctrl (12340 ): 5643 I/Os completed (+2801) 00:10:41.433 QEMU NVMe Ctrl (12341 ): 5459 I/Os completed (+2873) 00:10:41.433 00:10:42.372 QEMU NVMe Ctrl (12340 ): 8708 I/Os completed (+3065) 00:10:42.372 QEMU NVMe Ctrl (12341 ): 8550 I/Os completed (+3091) 00:10:42.372 00:10:43.313 QEMU NVMe Ctrl (12340 ): 12407 I/Os completed (+3699) 00:10:43.313 QEMU NVMe Ctrl (12341 ): 12263 I/Os completed (+3713) 00:10:43.313 00:10:44.252 QEMU NVMe Ctrl (12340 ): 16123 I/Os completed (+3716) 00:10:44.252 QEMU NVMe Ctrl (12341 ): 16020 I/Os completed (+3757) 00:10:44.252 00:10:45.634 QEMU NVMe Ctrl (12340 ): 19883 I/Os completed (+3760) 00:10:45.634 QEMU NVMe Ctrl (12341 ): 19832 I/Os completed (+3812) 00:10:45.634 00:10:46.205 QEMU NVMe Ctrl (12340 ): 23639 I/Os completed (+3756) 00:10:46.205 QEMU NVMe Ctrl (12341 ): 23583 I/Os completed (+3751) 
00:10:46.205 00:10:47.600 QEMU NVMe Ctrl (12340 ): 27287 I/Os completed (+3648) 00:10:47.600 QEMU NVMe Ctrl (12341 ): 27235 I/Os completed (+3652) 00:10:47.600 00:10:48.543 QEMU NVMe Ctrl (12340 ): 30971 I/Os completed (+3684) 00:10:48.543 QEMU NVMe Ctrl (12341 ): 30920 I/Os completed (+3685) 00:10:48.543 00:10:49.485 QEMU NVMe Ctrl (12340 ): 34637 I/Os completed (+3666) 00:10:49.485 QEMU NVMe Ctrl (12341 ): 34586 I/Os completed (+3666) 00:10:49.485 00:10:50.429 QEMU NVMe Ctrl (12340 ): 38333 I/Os completed (+3696) 00:10:50.430 QEMU NVMe Ctrl (12341 ): 38282 I/Os completed (+3696) 00:10:50.430 00:10:51.374 QEMU NVMe Ctrl (12340 ): 42009 I/Os completed (+3676) 00:10:51.374 QEMU NVMe Ctrl (12341 ): 41946 I/Os completed (+3664) 00:10:51.374 00:10:51.374 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:51.374 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.374 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.374 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.374 [2024-11-18 13:23:47.449833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:51.374 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:51.374 [2024-11-18 13:23:47.450895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.451023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.451056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.451140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:51.374 [2024-11-18 13:23:47.452676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.452736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.452765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.452857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/subsystem_device 00:10:51.374 EAL: Scan for (pci) bus failed. 00:10:51.374 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.374 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.374 [2024-11-18 13:23:47.471801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:51.374 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:51.374 [2024-11-18 13:23:47.472843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.472908] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.472942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.472958] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:51.374 [2024-11-18 13:23:47.474199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.474277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.474313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 [2024-11-18 13:23:47.474405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.374 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:51.374 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:51.635 Attaching to 0000:00:10.0 00:10:51.635 Attached to 0000:00:10.0 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.635 13:23:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:51.896 Attaching to 0000:00:11.0 00:10:51.896 Attached to 0000:00:11.0 00:10:52.466 QEMU NVMe Ctrl (12340 ): 2380 I/Os completed (+2380) 00:10:52.466 QEMU NVMe Ctrl (12341 ): 2060 I/Os completed (+2060) 00:10:52.466 00:10:53.407 QEMU NVMe Ctrl (12340 ): 6128 I/Os completed (+3748) 00:10:53.407 QEMU NVMe Ctrl (12341 ): 5808 I/Os completed (+3748) 00:10:53.407 00:10:54.350 QEMU NVMe Ctrl (12340 ): 9852 I/Os completed (+3724) 00:10:54.350 QEMU NVMe Ctrl (12341 ): 9532 I/Os completed (+3724) 00:10:54.350 00:10:55.296 QEMU NVMe Ctrl (12340 ): 13576 I/Os completed (+3724) 00:10:55.296 QEMU NVMe Ctrl (12341 ): 13256 I/Os completed (+3724) 00:10:55.296 00:10:56.240 QEMU NVMe Ctrl (12340 ): 17324 I/Os completed (+3748) 00:10:56.240 QEMU NVMe Ctrl (12341 ): 17004 I/Os completed (+3748) 00:10:56.240 00:10:57.629 QEMU NVMe Ctrl (12340 ): 21044 I/Os completed (+3720) 00:10:57.629 QEMU NVMe Ctrl (12341 ): 20724 I/Os completed (+3720) 00:10:57.629 00:10:58.569 QEMU NVMe Ctrl (12340 ): 25089 I/Os completed (+4045) 00:10:58.569 QEMU NVMe Ctrl (12341 ): 24479 I/Os completed (+3755) 00:10:58.569 00:10:59.512 QEMU NVMe Ctrl (12340 ): 29161 I/Os completed (+4072) 00:10:59.512 QEMU NVMe Ctrl (12341 ): 28538 I/Os completed (+4059) 00:10:59.512 
00:11:00.447 QEMU NVMe Ctrl (12340 ): 33169 I/Os completed (+4008) 00:11:00.447 QEMU NVMe Ctrl (12341 ): 32289 I/Os completed (+3751) 00:11:00.447 00:11:01.390 QEMU NVMe Ctrl (12340 ): 36934 I/Os completed (+3765) 00:11:01.390 QEMU NVMe Ctrl (12341 ): 36002 I/Os completed (+3713) 00:11:01.390 00:11:02.335 QEMU NVMe Ctrl (12340 ): 40657 I/Os completed (+3723) 00:11:02.335 QEMU NVMe Ctrl (12341 ): 39835 I/Os completed (+3833) 00:11:02.335 00:11:03.298 QEMU NVMe Ctrl (12340 ): 44365 I/Os completed (+3708) 00:11:03.298 QEMU NVMe Ctrl (12341 ): 43546 I/Os completed (+3711) 00:11:03.298 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.870 [2024-11-18 13:23:59.767499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:03.870 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:03.870 [2024-11-18 13:23:59.768679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.768793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.768829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.768892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:03.870 [2024-11-18 13:23:59.770466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.770568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.770603] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.770633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 EAL: Cannot open sysfs resource 00:11:03.870 EAL: pci_scan_one(): cannot parse resource 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.870 EAL: Scan for (pci) bus failed. 00:11:03.870 [2024-11-18 13:23:59.787449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:03.870 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:03.870 [2024-11-18 13:23:59.788427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.788462] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.788478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.788492] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:03.870 [2024-11-18 13:23:59.789522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.789554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.789569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 [2024-11-18 13:23:59.789581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:03.870 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:03.870 EAL: Scan for (pci) bus failed. 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.870 13:23:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:03.870 Attaching to 0000:00:10.0 00:11:03.870 Attached to 0000:00:10.0 00:11:04.131 13:24:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.131 13:24:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.131 13:24:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.131 Attaching to 0000:00:11.0 00:11:04.131 Attached to 0000:00:11.0 00:11:04.131 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:04.132 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:04.132 [2024-11-18 13:24:00.065392] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:16.357 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:16.357 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.357 13:24:12 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.93 00:11:16.357 13:24:12 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.93 00:11:16.357 13:24:12 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:16.357 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.93 00:11:16.357 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.93 2 00:11:16.357 remove_attach_helper took 42.93s to complete (handling 2 nvme drive(s)) 13:24:12 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:22.924 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78591 00:11:22.925 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78591) - No such process 00:11:22.925 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78591 00:11:22.925 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:22.925 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:22.925 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:22.925 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79135 00:11:22.925 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:22.925 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:22.925 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79135 00:11:22.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 79135 ']' 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.925 [2024-11-18 13:24:18.144537] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:11:22.925 [2024-11-18 13:24:18.144690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79135 ] 00:11:22.925 [2024-11-18 13:24:18.304459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.925 [2024-11-18 13:24:18.332736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:22.925 13:24:18 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.925 13:24:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.925 13:24:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.925 13:24:19 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:22.925 13:24:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:22.925 13:24:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:22.925 13:24:19 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:22.925 13:24:19 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:22.925 13:24:19 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:22.925 13:24:19 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:22.925 13:24:19 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:22.925 13:24:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:22.925 13:24:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:22.925 13:24:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:22.925 13:24:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:22.925 13:24:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.567 13:24:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.567 13:24:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.567 13:24:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:29.567 [2024-11-18 13:24:25.104114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:29.567 [2024-11-18 13:24:25.105203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.567 [2024-11-18 13:24:25.105236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.567 [2024-11-18 13:24:25.105248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.567 [2024-11-18 13:24:25.105261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.567 [2024-11-18 13:24:25.105270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.567 [2024-11-18 13:24:25.105277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.567 [2024-11-18 13:24:25.105287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.567 [2024-11-18 13:24:25.105293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.567 [2024-11-18 13:24:25.105301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.567 [2024-11-18 13:24:25.105308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.567 [2024-11-18 13:24:25.105315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.567 [2024-11-18 13:24:25.105322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.567 [2024-11-18 13:24:25.504107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:29.567 [2024-11-18 13:24:25.505289] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.567 [2024-11-18 13:24:25.505318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.567 [2024-11-18 13:24:25.505327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.567 [2024-11-18 13:24:25.505339] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.567 [2024-11-18 13:24:25.505346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.567 [2024-11-18 13:24:25.505354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.567 [2024-11-18 13:24:25.505360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.567 [2024-11-18 13:24:25.505368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.567 [2024-11-18 13:24:25.505374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.567 [2024-11-18 13:24:25.505383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.567 [2024-11-18 13:24:25.505389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.567 [2024-11-18 13:24:25.505396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.567 13:24:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.567 13:24:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.567 13:24:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:29.567 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.826 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.027 13:24:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.027 13:24:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.027 13:24:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.027 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.028 13:24:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.028 13:24:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.028 13:24:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:42.028 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:42.028 [2024-11-18 13:24:38.004289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:42.028 [2024-11-18 13:24:38.005335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.028 [2024-11-18 13:24:38.005366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.028 [2024-11-18 13:24:38.005378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.028 [2024-11-18 13:24:38.005391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.028 [2024-11-18 13:24:38.005399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.028 [2024-11-18 13:24:38.005407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.028 [2024-11-18 13:24:38.005414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.028 [2024-11-18 13:24:38.005421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.028 [2024-11-18 13:24:38.005429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.028 [2024-11-18 13:24:38.005435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.028 [2024-11-18 13:24:38.005442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.028 [2024-11-18 13:24:38.005449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.286 [2024-11-18 13:24:38.404297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:42.286 [2024-11-18 13:24:38.405298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.286 [2024-11-18 13:24:38.405408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.286 [2024-11-18 13:24:38.405421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.286 [2024-11-18 13:24:38.405432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.286 [2024-11-18 13:24:38.405439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.286 [2024-11-18 13:24:38.405447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.286 [2024-11-18 13:24:38.405454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.286 [2024-11-18 13:24:38.405462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.286 [2024-11-18 13:24:38.405469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.286 [2024-11-18 13:24:38.405476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.286 [2024-11-18 13:24:38.405482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.286 [2024-11-18 13:24:38.405490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.545 13:24:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.545 13:24:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.545 13:24:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.545 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:42.832 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:42.832 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.832 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.832 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.832 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:42.832 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:42.832 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.832 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.041 13:24:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.041 13:24:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.041 13:24:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.041 13:24:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.041 13:24:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.041 13:24:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:55.041 13:24:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:55.041 [2024-11-18 13:24:50.904478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:55.041 [2024-11-18 13:24:50.905562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.041 [2024-11-18 13:24:50.905589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.041 [2024-11-18 13:24:50.905604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.041 [2024-11-18 13:24:50.905617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.041 [2024-11-18 13:24:50.905626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.041 [2024-11-18 13:24:50.905632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.041 [2024-11-18 13:24:50.905640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.041 [2024-11-18 13:24:50.905647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.041 [2024-11-18 13:24:50.905655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.041 [2024-11-18 13:24:50.905661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.041 [2024-11-18 13:24:50.905668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.041 [2024-11-18 13:24:50.905675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.300 [2024-11-18 13:24:51.304483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:55.300 [2024-11-18 13:24:51.305465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.300 [2024-11-18 13:24:51.305495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.300 [2024-11-18 13:24:51.305504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.300 [2024-11-18 13:24:51.305516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.300 [2024-11-18 13:24:51.305522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.300 [2024-11-18 13:24:51.305531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.300 [2024-11-18 13:24:51.305538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.300 [2024-11-18 13:24:51.305546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.300 [2024-11-18 13:24:51.305552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.300 [2024-11-18 13:24:51.305559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.300 [2024-11-18 13:24:51.305565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.300 [2024-11-18 13:24:51.305572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.300 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:55.300 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.300 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.300 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.300 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.300 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.300 13:24:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.300 13:24:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.300 13:24:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.300 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:55.300 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.557 13:24:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.66 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.66 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.66 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.66 2 00:12:07.836 remove_attach_helper took 44.66s to complete (handling 2 nvme drive(s)) 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:07.836 13:25:03 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:07.836 13:25:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:07.836 13:25:03 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.404 13:25:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.404 13:25:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.404 13:25:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:14.404 13:25:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:14.404 [2024-11-18 13:25:09.795536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:14.404 [2024-11-18 13:25:09.796474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.404 [2024-11-18 13:25:09.796502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.404 [2024-11-18 13:25:09.796515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.404 [2024-11-18 13:25:09.796528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.404 [2024-11-18 13:25:09.796538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.404 [2024-11-18 13:25:09.796546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.404 [2024-11-18 13:25:09.796554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.404 [2024-11-18 13:25:09.796562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.404 [2024-11-18 13:25:09.796575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.404 [2024-11-18 13:25:09.796582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.404 [2024-11-18 13:25:09.796591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.404 [2024-11-18 13:25:09.796598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.404 [2024-11-18 13:25:10.195540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
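Between rounds the script toggles SPDK's own hotplug poller off and back on (sw_hotplug.sh@119 and @120 earlier in the trace) before re-entering the remove/attach loop as debug_remove_attach_helper 3 6 true, i.e. three hotplug events with a six second settle time and use_bdev=true, so device presence is checked through bdev_get_bdevs. Issued by hand, those two RPCs look like the following; rpc.py is used here in place of the autotest rpc_cmd wrapper.

    ./scripts/rpc.py bdev_nvme_set_hotplug -d    # disable the bdev_nvme hotplug poller
    ./scripts/rpc.py bdev_nvme_set_hotplug -e    # re-enable it for the next round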
00:12:14.404 [2024-11-18 13:25:10.196291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.404 [2024-11-18 13:25:10.196321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.404 [2024-11-18 13:25:10.196331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.404 [2024-11-18 13:25:10.196344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.404 [2024-11-18 13:25:10.196351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.404 [2024-11-18 13:25:10.196360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.404 [2024-11-18 13:25:10.196367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.404 [2024-11-18 13:25:10.196374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.404 [2024-11-18 13:25:10.196382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.404 [2024-11-18 13:25:10.196389] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.404 [2024-11-18 13:25:10.196396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.404 [2024-11-18 13:25:10.196406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.404 13:25:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.404 13:25:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.404 13:25:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.404 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.405 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:14.405 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:14.405 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.405 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.405 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.405 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:14.664 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:14.664 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.664 13:25:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.870 13:25:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.870 13:25:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.870 13:25:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.870 [2024-11-18 13:25:22.595729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:26.870 [2024-11-18 13:25:22.596743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.870 [2024-11-18 13:25:22.596851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.870 [2024-11-18 13:25:22.596928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.870 [2024-11-18 13:25:22.596979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.870 [2024-11-18 13:25:22.597000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.870 [2024-11-18 13:25:22.597024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.870 [2024-11-18 13:25:22.597049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.870 [2024-11-18 13:25:22.597153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.870 [2024-11-18 13:25:22.597193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.870 [2024-11-18 13:25:22.597218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.870 [2024-11-18 13:25:22.597238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.870 [2024-11-18 13:25:22.597261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.870 13:25:22 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.870 13:25:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.870 13:25:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.870 13:25:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:26.870 13:25:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:26.870 [2024-11-18 13:25:22.995730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:26.870 [2024-11-18 13:25:22.996456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.129 [2024-11-18 13:25:22.996550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.129 [2024-11-18 13:25:22.996563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.129 [2024-11-18 13:25:22.996574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.129 [2024-11-18 13:25:22.996581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.129 [2024-11-18 13:25:22.996589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.129 [2024-11-18 13:25:22.996596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.129 [2024-11-18 13:25:22.996604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.129 [2024-11-18 13:25:22.996610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.129 [2024-11-18 13:25:22.996618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.129 [2024-11-18 13:25:22.996624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.129 [2024-11-18 13:25:22.996632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.129 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:27.129 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:27.129 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:27.129 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.129 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.129 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:27.129 13:25:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.129 13:25:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.129 13:25:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.129 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:27.129 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.388 13:25:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.588 13:25:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.588 13:25:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.588 13:25:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.588 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.588 [2024-11-18 13:25:35.495920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
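The bdev_bdfs helper that keeps reappearing in this trace (sw_hotplug.sh@12-13) is what the loop polls: it asks the SPDK app for its bdevs over JSON-RPC and reduces them to a sorted list of NVMe PCI addresses, which @50 watches shrink to zero while waiting for removal and @71 compares against "0000:00:10.0 0000:00:11.0" after re-attach. Reconstructed from the trace, it is essentially:

    bdev_bdfs() {
        # rpc_cmd is the autotest JSON-RPC wrapper; scripts/rpc.py can stand in when running by hand
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }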
00:12:39.588 [2024-11-18 13:25:35.496859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.588 [2024-11-18 13:25:35.496955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.589 [2024-11-18 13:25:35.497033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.589 [2024-11-18 13:25:35.497082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.589 [2024-11-18 13:25:35.497105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.589 [2024-11-18 13:25:35.497149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.589 [2024-11-18 13:25:35.497188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.589 [2024-11-18 13:25:35.497206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.589 [2024-11-18 13:25:35.497252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.589 [2024-11-18 13:25:35.497276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.589 [2024-11-18 13:25:35.497294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.589 [2024-11-18 13:25:35.497351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.589 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:39.589 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:39.589 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:39.589 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.589 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.589 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.589 13:25:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.589 13:25:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.589 13:25:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.589 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:39.589 13:25:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:39.847 [2024-11-18 13:25:35.895926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:39.847 [2024-11-18 13:25:35.896740] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.847 [2024-11-18 13:25:35.896834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.847 [2024-11-18 13:25:35.896895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.847 [2024-11-18 13:25:35.897060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.847 [2024-11-18 13:25:35.897080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.847 [2024-11-18 13:25:35.897107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.847 [2024-11-18 13:25:35.897130] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.847 [2024-11-18 13:25:35.897149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.847 [2024-11-18 13:25:35.897211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.847 [2024-11-18 13:25:35.897237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.847 [2024-11-18 13:25:35.897254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.847 [2024-11-18 13:25:35.897277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:40.106 13:25:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.106 13:25:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.106 13:25:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:40.106 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:40.365 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:40.365 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:40.365 13:25:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.64 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.64 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:12:52.571 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79135 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 79135 ']' 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 79135 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79135 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.571 killing process with pid 79135 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79135' 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@973 -- # kill 79135 00:12:52.571 13:25:48 sw_hotplug -- common/autotest_common.sh@978 -- # wait 79135 00:12:52.571 13:25:48 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:52.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:53.406 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:53.406 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:53.406 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:53.406 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:53.406 00:12:53.406 real 2m28.081s 00:12:53.406 user 1m47.434s 00:12:53.406 sys 0m19.142s 00:12:53.406 13:25:49 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.406 ************************************ 00:12:53.406 END TEST sw_hotplug 00:12:53.406 13:25:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.406 ************************************ 00:12:53.667 13:25:49 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:53.667 13:25:49 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:53.667 13:25:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.667 13:25:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.667 13:25:49 -- common/autotest_common.sh@10 -- # set +x 00:12:53.667 ************************************ 00:12:53.667 START TEST nvme_xnvme 00:12:53.667 ************************************ 00:12:53.667 13:25:49 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:53.667 * Looking for test storage... 00:12:53.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:53.667 13:25:49 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:53.667 13:25:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:53.667 13:25:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:53.667 13:25:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:53.667 13:25:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:53.668 13:25:49 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.668 13:25:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:53.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.668 --rc genhtml_branch_coverage=1 00:12:53.668 --rc genhtml_function_coverage=1 00:12:53.668 --rc genhtml_legend=1 00:12:53.668 --rc geninfo_all_blocks=1 00:12:53.668 --rc geninfo_unexecuted_blocks=1 00:12:53.668 00:12:53.668 ' 00:12:53.668 13:25:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:53.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.668 --rc genhtml_branch_coverage=1 00:12:53.668 --rc genhtml_function_coverage=1 00:12:53.668 --rc genhtml_legend=1 00:12:53.668 --rc geninfo_all_blocks=1 00:12:53.668 --rc geninfo_unexecuted_blocks=1 00:12:53.668 00:12:53.668 ' 00:12:53.668 13:25:49 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:53.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.668 --rc genhtml_branch_coverage=1 00:12:53.668 --rc genhtml_function_coverage=1 00:12:53.668 --rc genhtml_legend=1 00:12:53.668 --rc geninfo_all_blocks=1 00:12:53.668 --rc geninfo_unexecuted_blocks=1 00:12:53.668 00:12:53.668 ' 00:12:53.668 13:25:49 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:53.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.668 --rc genhtml_branch_coverage=1 00:12:53.668 --rc genhtml_function_coverage=1 00:12:53.668 --rc genhtml_legend=1 00:12:53.668 --rc geninfo_all_blocks=1 00:12:53.668 --rc geninfo_unexecuted_blocks=1 00:12:53.668 00:12:53.668 ' 00:12:53.668 13:25:49 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.668 13:25:49 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.668 13:25:49 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.668 13:25:49 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.668 13:25:49 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.668 13:25:49 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:53.668 13:25:49 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.668 13:25:49 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:53.668 13:25:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.668 13:25:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.668 13:25:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.668 ************************************ 00:12:53.668 START TEST xnvme_to_malloc_dd_copy 00:12:53.668 ************************************ 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:53.668 13:25:49 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:53.668 13:25:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:53.929 { 00:12:53.929 "subsystems": [ 00:12:53.929 { 00:12:53.929 "subsystem": "bdev", 00:12:53.929 "config": [ 00:12:53.929 { 00:12:53.929 "params": { 00:12:53.929 "block_size": 512, 00:12:53.929 "num_blocks": 2097152, 00:12:53.929 "name": "malloc0" 00:12:53.929 }, 00:12:53.929 "method": "bdev_malloc_create" 00:12:53.929 }, 00:12:53.929 { 00:12:53.929 "params": { 00:12:53.929 "io_mechanism": "libaio", 00:12:53.929 "filename": "/dev/nullb0", 00:12:53.929 "name": "null0" 00:12:53.929 }, 00:12:53.929 "method": "bdev_xnvme_create" 00:12:53.929 }, 00:12:53.929 { 00:12:53.929 "method": "bdev_wait_for_examine" 00:12:53.929 } 00:12:53.929 ] 00:12:53.929 } 00:12:53.929 ] 00:12:53.929 } 00:12:53.929 [2024-11-18 13:25:49.826331] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
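The JSON blob just printed is the whole fixture for this test: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) and an xnvme bdev named null0 that drives /dev/nullb0 (created earlier via modprobe null_blk gb=1) through the libaio io_mechanism, with spdk_dd copying one bdev into the other. The harness feeds the config on /dev/fd/62; a hand-run equivalent with the config in a regular file would look like this (the file name is arbitrary, everything else is taken from the trace). The later rounds of this test rerun the same copy with "io_mechanism": "io_uring", which is where the higher MB/s figures further down come from.

    cat > xnvme_dd.json <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":2097152,"name":"malloc0"},"method":"bdev_malloc_create"},
      {"params":{"io_mechanism":"libaio","filename":"/dev/nullb0","name":"null0"},"method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}
    EOF
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme_dd.json   # malloc0 -> null0, as in the run above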
00:12:53.929 [2024-11-18 13:25:49.826446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80486 ] 00:12:53.930 [2024-11-18 13:25:49.985087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.930 [2024-11-18 13:25:50.003804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.315  [2024-11-18T13:25:52.388Z] Copying: 231/1024 [MB] (231 MBps) [2024-11-18T13:25:53.332Z] Copying: 462/1024 [MB] (231 MBps) [2024-11-18T13:25:54.267Z] Copying: 693/1024 [MB] (231 MBps) [2024-11-18T13:25:54.526Z] Copying: 989/1024 [MB] (295 MBps) [2024-11-18T13:25:54.786Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:12:58.658 00:12:58.658 13:25:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:58.658 13:25:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:58.658 13:25:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:58.658 13:25:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:58.658 { 00:12:58.658 "subsystems": [ 00:12:58.658 { 00:12:58.658 "subsystem": "bdev", 00:12:58.658 "config": [ 00:12:58.658 { 00:12:58.658 "params": { 00:12:58.658 "block_size": 512, 00:12:58.658 "num_blocks": 2097152, 00:12:58.658 "name": "malloc0" 00:12:58.658 }, 00:12:58.658 "method": "bdev_malloc_create" 00:12:58.658 }, 00:12:58.658 { 00:12:58.658 "params": { 00:12:58.658 "io_mechanism": "libaio", 00:12:58.658 "filename": "/dev/nullb0", 00:12:58.658 "name": "null0" 00:12:58.658 }, 00:12:58.658 "method": "bdev_xnvme_create" 00:12:58.658 }, 00:12:58.658 { 00:12:58.658 "method": "bdev_wait_for_examine" 00:12:58.658 } 00:12:58.658 ] 00:12:58.658 } 00:12:58.658 ] 00:12:58.658 } 00:12:58.658 [2024-11-18 13:25:54.737248] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:12:58.658 [2024-11-18 13:25:54.737342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80551 ] 00:12:58.918 [2024-11-18 13:25:54.881887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.918 [2024-11-18 13:25:54.897947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.292  [2024-11-18T13:25:57.355Z] Copying: 311/1024 [MB] (311 MBps) [2024-11-18T13:25:58.295Z] Copying: 623/1024 [MB] (311 MBps) [2024-11-18T13:25:58.554Z] Copying: 935/1024 [MB] (311 MBps) [2024-11-18T13:25:58.814Z] Copying: 1024/1024 [MB] (average 311 MBps) 00:13:02.686 00:13:02.686 13:25:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:02.686 13:25:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:02.686 13:25:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:02.686 13:25:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:02.686 13:25:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:02.686 13:25:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:02.686 { 00:13:02.686 "subsystems": [ 00:13:02.686 { 00:13:02.686 "subsystem": "bdev", 00:13:02.686 "config": [ 00:13:02.686 { 00:13:02.686 "params": { 00:13:02.686 "block_size": 512, 00:13:02.686 "num_blocks": 2097152, 00:13:02.686 "name": "malloc0" 00:13:02.686 }, 00:13:02.686 "method": "bdev_malloc_create" 00:13:02.686 }, 00:13:02.686 { 00:13:02.686 "params": { 00:13:02.686 "io_mechanism": "io_uring", 00:13:02.686 "filename": "/dev/nullb0", 00:13:02.686 "name": "null0" 00:13:02.686 }, 00:13:02.686 "method": "bdev_xnvme_create" 00:13:02.686 }, 00:13:02.686 { 00:13:02.686 "method": "bdev_wait_for_examine" 00:13:02.686 } 00:13:02.686 ] 00:13:02.686 } 00:13:02.686 ] 00:13:02.686 } 00:13:02.686 [2024-11-18 13:25:58.792234] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:13:02.686 [2024-11-18 13:25:58.792344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80600 ] 00:13:02.946 [2024-11-18 13:25:58.944915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.946 [2024-11-18 13:25:58.961068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.323  [2024-11-18T13:26:01.475Z] Copying: 317/1024 [MB] (317 MBps) [2024-11-18T13:26:02.410Z] Copying: 636/1024 [MB] (318 MBps) [2024-11-18T13:26:02.410Z] Copying: 955/1024 [MB] (319 MBps) [2024-11-18T13:26:02.668Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:13:06.540 00:13:06.799 13:26:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:06.799 13:26:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:06.799 13:26:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:06.799 13:26:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:06.799 { 00:13:06.799 "subsystems": [ 00:13:06.799 { 00:13:06.799 "subsystem": "bdev", 00:13:06.799 "config": [ 00:13:06.799 { 00:13:06.799 "params": { 00:13:06.799 "block_size": 512, 00:13:06.799 "num_blocks": 2097152, 00:13:06.799 "name": "malloc0" 00:13:06.799 }, 00:13:06.799 "method": "bdev_malloc_create" 00:13:06.799 }, 00:13:06.799 { 00:13:06.799 "params": { 00:13:06.799 "io_mechanism": "io_uring", 00:13:06.799 "filename": "/dev/nullb0", 00:13:06.799 "name": "null0" 00:13:06.799 }, 00:13:06.799 "method": "bdev_xnvme_create" 00:13:06.799 }, 00:13:06.799 { 00:13:06.799 "method": "bdev_wait_for_examine" 00:13:06.799 } 00:13:06.799 ] 00:13:06.799 } 00:13:06.799 ] 00:13:06.799 } 00:13:06.799 [2024-11-18 13:26:02.736842] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:13:06.799 [2024-11-18 13:26:02.736967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80654 ] 00:13:06.799 [2024-11-18 13:26:02.889472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.800 [2024-11-18 13:26:02.905410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.179  [2024-11-18T13:26:05.242Z] Copying: 323/1024 [MB] (323 MBps) [2024-11-18T13:26:06.179Z] Copying: 647/1024 [MB] (324 MBps) [2024-11-18T13:26:06.438Z] Copying: 971/1024 [MB] (323 MBps) [2024-11-18T13:26:06.697Z] Copying: 1024/1024 [MB] (average 323 MBps) 00:13:10.569 00:13:10.569 13:26:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:10.569 13:26:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:10.569 00:13:10.569 real 0m16.871s 00:13:10.569 user 0m14.079s 00:13:10.569 sys 0m2.311s 00:13:10.569 13:26:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.569 13:26:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:10.569 ************************************ 00:13:10.569 END TEST xnvme_to_malloc_dd_copy 00:13:10.569 ************************************ 00:13:10.569 13:26:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:10.569 13:26:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.569 13:26:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.569 13:26:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.569 ************************************ 00:13:10.569 START TEST xnvme_bdevperf 00:13:10.569 ************************************ 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:10.569 
13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:10.569 13:26:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.569 { 00:13:10.569 "subsystems": [ 00:13:10.569 { 00:13:10.569 "subsystem": "bdev", 00:13:10.569 "config": [ 00:13:10.570 { 00:13:10.570 "params": { 00:13:10.570 "io_mechanism": "libaio", 00:13:10.570 "filename": "/dev/nullb0", 00:13:10.570 "name": "null0" 00:13:10.570 }, 00:13:10.570 "method": "bdev_xnvme_create" 00:13:10.570 }, 00:13:10.570 { 00:13:10.570 "method": "bdev_wait_for_examine" 00:13:10.570 } 00:13:10.570 ] 00:13:10.570 } 00:13:10.570 ] 00:13:10.570 } 00:13:10.829 [2024-11-18 13:26:06.720377] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:13:10.829 [2024-11-18 13:26:06.720482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80731 ] 00:13:10.829 [2024-11-18 13:26:06.877559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.829 [2024-11-18 13:26:06.895790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.088 Running I/O for 5 seconds... 00:13:12.962 153792.00 IOPS, 600.75 MiB/s [2024-11-18T13:26:10.026Z] 180320.00 IOPS, 704.38 MiB/s [2024-11-18T13:26:11.402Z] 188757.33 IOPS, 737.33 MiB/s [2024-11-18T13:26:12.339Z] 193088.00 IOPS, 754.25 MiB/s [2024-11-18T13:26:12.339Z] 195724.80 IOPS, 764.55 MiB/s 00:13:16.211 Latency(us) 00:13:16.211 [2024-11-18T13:26:12.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.211 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:16.211 null0 : 5.00 195667.94 764.33 0.00 0.00 324.66 300.90 2558.42 00:13:16.211 [2024-11-18T13:26:12.339Z] =================================================================================================================== 00:13:16.211 [2024-11-18T13:26:12.339Z] Total : 195667.94 764.33 0.00 0.00 324.66 300.90 2558.42 00:13:16.211 13:26:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:16.211 13:26:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:16.211 13:26:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:16.211 13:26:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:16.211 13:26:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:16.211 13:26:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:16.211 { 00:13:16.211 "subsystems": [ 00:13:16.211 { 00:13:16.211 "subsystem": "bdev", 00:13:16.211 "config": [ 00:13:16.211 { 00:13:16.211 "params": { 00:13:16.211 "io_mechanism": "io_uring", 00:13:16.211 "filename": "/dev/nullb0", 00:13:16.211 "name": "null0" 00:13:16.211 }, 00:13:16.211 "method": "bdev_xnvme_create" 00:13:16.211 }, 
00:13:16.211 { 00:13:16.211 "method": "bdev_wait_for_examine" 00:13:16.211 } 00:13:16.211 ] 00:13:16.211 } 00:13:16.211 ] 00:13:16.211 } 00:13:16.211 [2024-11-18 13:26:12.179327] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:13:16.211 [2024-11-18 13:26:12.179430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80794 ] 00:13:16.211 [2024-11-18 13:26:12.332847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.469 [2024-11-18 13:26:12.349349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.469 Running I/O for 5 seconds... 00:13:18.338 238400.00 IOPS, 931.25 MiB/s [2024-11-18T13:26:15.838Z] 238464.00 IOPS, 931.50 MiB/s [2024-11-18T13:26:16.772Z] 238464.00 IOPS, 931.50 MiB/s [2024-11-18T13:26:17.708Z] 238480.00 IOPS, 931.56 MiB/s [2024-11-18T13:26:17.708Z] 238451.20 IOPS, 931.45 MiB/s 00:13:21.580 Latency(us) 00:13:21.580 [2024-11-18T13:26:17.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.580 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:21.580 null0 : 5.00 238380.61 931.17 0.00 0.00 266.10 239.46 1474.56 00:13:21.580 [2024-11-18T13:26:17.708Z] =================================================================================================================== 00:13:21.580 [2024-11-18T13:26:17.708Z] Total : 238380.61 931.17 0.00 0.00 266.10 239.46 1474.56 00:13:21.580 13:26:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:21.580 13:26:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:21.580 00:13:21.580 real 0m10.936s 00:13:21.580 user 0m8.536s 00:13:21.580 sys 0m2.174s 00:13:21.580 13:26:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.580 ************************************ 00:13:21.580 END TEST xnvme_bdevperf 00:13:21.580 ************************************ 00:13:21.580 13:26:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:21.580 00:13:21.580 real 0m28.044s 00:13:21.580 user 0m22.731s 00:13:21.580 sys 0m4.592s 00:13:21.580 13:26:17 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.580 13:26:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.580 ************************************ 00:13:21.580 END TEST nvme_xnvme 00:13:21.580 ************************************ 00:13:21.580 13:26:17 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:21.580 13:26:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:21.580 13:26:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.580 13:26:17 -- common/autotest_common.sh@10 -- # set +x 00:13:21.580 ************************************ 00:13:21.580 START TEST blockdev_xnvme 00:13:21.580 ************************************ 00:13:21.580 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:21.842 * Looking for test storage... 
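The two bdevperf passes above drive the same null0 bdev through both xnvme io_mechanisms at queue depth 64 with 4 KiB random reads for 5 seconds; on this machine libaio averaged roughly 196K IOPS (764 MiB/s) and io_uring roughly 238K IOPS (931 MiB/s). Reproducing the io_uring pass by hand is just a matter of feeding bdevperf the same single-bdev config (sketch below; the JSON file name is arbitrary, the command-line flags are those shown in the trace).

    cat > null0.json <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"io_mechanism":"io_uring","filename":"/dev/nullb0","name":"null0"},"method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}
    EOF
    ./build/examples/bdevperf --json null0.json -q 64 -w randread -t 5 -T null0 -o 4096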
00:13:21.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:21.842 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:21.842 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:21.842 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:21.842 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.842 13:26:17 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:21.842 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.842 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:21.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.842 --rc genhtml_branch_coverage=1 00:13:21.842 --rc genhtml_function_coverage=1 00:13:21.842 --rc genhtml_legend=1 00:13:21.842 --rc geninfo_all_blocks=1 00:13:21.842 --rc geninfo_unexecuted_blocks=1 00:13:21.842 00:13:21.842 ' 00:13:21.842 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:21.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.842 --rc genhtml_branch_coverage=1 00:13:21.842 --rc genhtml_function_coverage=1 00:13:21.842 --rc genhtml_legend=1 
00:13:21.842 --rc geninfo_all_blocks=1 00:13:21.842 --rc geninfo_unexecuted_blocks=1 00:13:21.842 00:13:21.842 ' 00:13:21.842 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:21.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.842 --rc genhtml_branch_coverage=1 00:13:21.842 --rc genhtml_function_coverage=1 00:13:21.842 --rc genhtml_legend=1 00:13:21.842 --rc geninfo_all_blocks=1 00:13:21.842 --rc geninfo_unexecuted_blocks=1 00:13:21.842 00:13:21.842 ' 00:13:21.842 13:26:17 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:21.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.842 --rc genhtml_branch_coverage=1 00:13:21.842 --rc genhtml_function_coverage=1 00:13:21.842 --rc genhtml_legend=1 00:13:21.842 --rc geninfo_all_blocks=1 00:13:21.842 --rc geninfo_unexecuted_blocks=1 00:13:21.842 00:13:21.842 ' 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:21.842 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80933 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80933 00:13:21.843 13:26:17 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 80933 ']' 00:13:21.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
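Once spdk_tgt (pid 80933 here) is listening on the default /var/tmp/spdk.sock, setup_xnvme_conf walks the /dev/nvme*n* block devices and creates one xnvme bdev per namespace over RPC; the printf of bdev_xnvme_create commands a little further down shows exactly what gets piped into rpc_cmd. A hedged sketch of doing the same by hand with rpc.py, assuming the same namespaces as this run:

# assumes spdk_tgt is already up and listening on the default /var/tmp/spdk.sock
cd /home/vagrant/spdk_repo/spdk
for dev in /dev/nvme*n*; do
  ./scripts/rpc.py bdev_xnvme_create "$dev" "$(basename "$dev")" io_uring
done
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py bdev_get_bdevs   # should list nvme0n1 .. nvme3n1 as "xNVMe bdev" entries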
00:13:21.843 13:26:17 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.843 13:26:17 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.843 13:26:17 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.843 13:26:17 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.843 13:26:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.843 13:26:17 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:21.843 [2024-11-18 13:26:17.877052] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:13:21.843 [2024-11-18 13:26:17.877218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80933 ] 00:13:22.101 [2024-11-18 13:26:18.033055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.101 [2024-11-18 13:26:18.058118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.667 13:26:18 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.667 13:26:18 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:13:22.667 13:26:18 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:22.667 13:26:18 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:22.667 13:26:18 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:22.667 13:26:18 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:22.667 13:26:18 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:22.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:23.183 Waiting for block devices as requested 00:13:23.183 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.183 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.183 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.440 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:28.806 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 
00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:28.806 13:26:24 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:28.806 nvme0n1 00:13:28.806 nvme1n1 00:13:28.806 nvme2n1 00:13:28.806 nvme2n2 00:13:28.806 nvme2n3 00:13:28.806 nvme3n1 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.806 13:26:24 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.806 13:26:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:28.806 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.807 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:28.807 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a7a872ff-89aa-4108-9d6a-225210bdcd32"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a7a872ff-89aa-4108-9d6a-225210bdcd32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "26a7e4e1-48f0-45d4-ac29-582941ecdd5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "26a7e4e1-48f0-45d4-ac29-582941ecdd5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "aac4586a-e02c-414c-b9e2-ec16c4ae6187"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aac4586a-e02c-414c-b9e2-ec16c4ae6187",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' 
' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "527324ea-b24b-452c-95d4-347409f965c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "527324ea-b24b-452c-95d4-347409f965c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ca474b3d-9952-40da-8128-ce7df007f755"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca474b3d-9952-40da-8128-ce7df007f755",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "01fa23b7-925d-43b6-a150-6889b85bee1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "01fa23b7-925d-43b6-a150-6889b85bee1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:28.807 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:28.807 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:28.807 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:28.807 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:28.807 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80933 00:13:28.807 13:26:24 
blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 80933 ']' 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 80933 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80933 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.807 killing process with pid 80933 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80933' 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 80933 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 80933 00:13:28.807 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:28.807 13:26:24 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.807 13:26:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.807 ************************************ 00:13:28.807 START TEST bdev_hello_world 00:13:28.807 ************************************ 00:13:28.807 13:26:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:28.807 [2024-11-18 13:26:24.883465] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:13:28.807 [2024-11-18 13:26:24.883576] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81281 ] 00:13:29.066 [2024-11-18 13:26:25.036801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.066 [2024-11-18 13:26:25.053364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.327 [2024-11-18 13:26:25.215315] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:29.327 [2024-11-18 13:26:25.215361] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:29.327 [2024-11-18 13:26:25.215385] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:29.327 [2024-11-18 13:26:25.217338] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:29.327 [2024-11-18 13:26:25.217966] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:29.327 [2024-11-18 13:26:25.217996] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:29.327 [2024-11-18 13:26:25.218811] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
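The bdev_hello_world run above drives the hello_bdev example against the nvme0n1 xnvme bdev: it opens the bdev, takes an I/O channel, writes a buffer, reads it back, and prints the recovered string. A minimal re-run sketch, assuming the bdev.json generated by the xnvme setup step is still in place:

# assumes test/bdev/bdev.json still describes the xnvme bdevs created earlier in this run
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -b nvme0n1
# on success the NOTICE lines end with: Read string from bdev : Hello World!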
00:13:29.327 00:13:29.327 [2024-11-18 13:26:25.218853] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:29.327 00:13:29.327 real 0m0.526s 00:13:29.327 user 0m0.279s 00:13:29.327 sys 0m0.137s 00:13:29.327 13:26:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.327 ************************************ 00:13:29.327 END TEST bdev_hello_world 00:13:29.327 ************************************ 00:13:29.327 13:26:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:29.327 13:26:25 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:29.327 13:26:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.327 13:26:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.327 13:26:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.327 ************************************ 00:13:29.327 START TEST bdev_bounds 00:13:29.327 ************************************ 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81301 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:29.327 Process bdevio pid: 81301 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81301' 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81301 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 81301 ']' 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.327 13:26:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:29.587 [2024-11-18 13:26:25.475529] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
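bdev_bounds launches bdevio with -w so it sits waiting to be driven over RPC, loads the same bdev.json, and then tests.py perform_tests triggers the per-bdev CUnit suites that follow. A sketch of that two-step invocation, assuming the config file from the setup step; the sleep is a crude stand-in for the script's own wait on bdevio's RPC socket:

# assumes bdev.json from the xnvme setup step; bdevio (-w) waits until perform_tests is invoked
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
sleep 2
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
wait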
00:13:29.588 [2024-11-18 13:26:25.475647] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81301 ] 00:13:29.588 [2024-11-18 13:26:25.633380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:29.588 [2024-11-18 13:26:25.654666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.588 [2024-11-18 13:26:25.654950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.588 [2024-11-18 13:26:25.654992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.534 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.534 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:30.534 13:26:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:30.534 I/O targets: 00:13:30.534 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:30.534 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:30.534 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:30.534 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:30.534 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:30.534 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:30.534 00:13:30.534 00:13:30.534 CUnit - A unit testing framework for C - Version 2.1-3 00:13:30.534 http://cunit.sourceforge.net/ 00:13:30.534 00:13:30.534 00:13:30.534 Suite: bdevio tests on: nvme3n1 00:13:30.534 Test: blockdev write read block ...passed 00:13:30.534 Test: blockdev write zeroes read block ...passed 00:13:30.534 Test: blockdev write zeroes read no split ...passed 00:13:30.534 Test: blockdev write zeroes read split ...passed 00:13:30.534 Test: blockdev write zeroes read split partial ...passed 00:13:30.534 Test: blockdev reset ...passed 00:13:30.534 Test: blockdev write read 8 blocks ...passed 00:13:30.534 Test: blockdev write read size > 128k ...passed 00:13:30.534 Test: blockdev write read invalid size ...passed 00:13:30.534 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.534 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.534 Test: blockdev write read max offset ...passed 00:13:30.534 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.534 Test: blockdev writev readv 8 blocks ...passed 00:13:30.534 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.534 Test: blockdev writev readv block ...passed 00:13:30.534 Test: blockdev writev readv size > 128k ...passed 00:13:30.534 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.535 Test: blockdev comparev and writev ...passed 00:13:30.535 Test: blockdev nvme passthru rw ...passed 00:13:30.535 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.535 Test: blockdev nvme admin passthru ...passed 00:13:30.535 Test: blockdev copy ...passed 00:13:30.535 Suite: bdevio tests on: nvme2n3 00:13:30.535 Test: blockdev write read block ...passed 00:13:30.535 Test: blockdev write zeroes read block ...passed 00:13:30.535 Test: blockdev write zeroes read no split ...passed 00:13:30.535 Test: blockdev write zeroes read split ...passed 00:13:30.535 Test: blockdev write zeroes read split partial ...passed 00:13:30.535 Test: blockdev reset ...passed 
00:13:30.535 Test: blockdev write read 8 blocks ...passed 00:13:30.535 Test: blockdev write read size > 128k ...passed 00:13:30.535 Test: blockdev write read invalid size ...passed 00:13:30.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.535 Test: blockdev write read max offset ...passed 00:13:30.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.535 Test: blockdev writev readv 8 blocks ...passed 00:13:30.535 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.535 Test: blockdev writev readv block ...passed 00:13:30.535 Test: blockdev writev readv size > 128k ...passed 00:13:30.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.535 Test: blockdev comparev and writev ...passed 00:13:30.535 Test: blockdev nvme passthru rw ...passed 00:13:30.535 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.535 Test: blockdev nvme admin passthru ...passed 00:13:30.535 Test: blockdev copy ...passed 00:13:30.535 Suite: bdevio tests on: nvme2n2 00:13:30.535 Test: blockdev write read block ...passed 00:13:30.535 Test: blockdev write zeroes read block ...passed 00:13:30.535 Test: blockdev write zeroes read no split ...passed 00:13:30.535 Test: blockdev write zeroes read split ...passed 00:13:30.535 Test: blockdev write zeroes read split partial ...passed 00:13:30.535 Test: blockdev reset ...passed 00:13:30.535 Test: blockdev write read 8 blocks ...passed 00:13:30.535 Test: blockdev write read size > 128k ...passed 00:13:30.535 Test: blockdev write read invalid size ...passed 00:13:30.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.535 Test: blockdev write read max offset ...passed 00:13:30.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.535 Test: blockdev writev readv 8 blocks ...passed 00:13:30.535 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.535 Test: blockdev writev readv block ...passed 00:13:30.535 Test: blockdev writev readv size > 128k ...passed 00:13:30.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.535 Test: blockdev comparev and writev ...passed 00:13:30.535 Test: blockdev nvme passthru rw ...passed 00:13:30.535 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.535 Test: blockdev nvme admin passthru ...passed 00:13:30.535 Test: blockdev copy ...passed 00:13:30.535 Suite: bdevio tests on: nvme2n1 00:13:30.535 Test: blockdev write read block ...passed 00:13:30.535 Test: blockdev write zeroes read block ...passed 00:13:30.535 Test: blockdev write zeroes read no split ...passed 00:13:30.535 Test: blockdev write zeroes read split ...passed 00:13:30.535 Test: blockdev write zeroes read split partial ...passed 00:13:30.535 Test: blockdev reset ...passed 00:13:30.535 Test: blockdev write read 8 blocks ...passed 00:13:30.535 Test: blockdev write read size > 128k ...passed 00:13:30.535 Test: blockdev write read invalid size ...passed 00:13:30.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.535 Test: blockdev write read max offset ...passed 00:13:30.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.535 Test: blockdev writev readv 8 blocks 
...passed 00:13:30.535 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.535 Test: blockdev writev readv block ...passed 00:13:30.535 Test: blockdev writev readv size > 128k ...passed 00:13:30.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.535 Test: blockdev comparev and writev ...passed 00:13:30.535 Test: blockdev nvme passthru rw ...passed 00:13:30.535 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.535 Test: blockdev nvme admin passthru ...passed 00:13:30.535 Test: blockdev copy ...passed 00:13:30.535 Suite: bdevio tests on: nvme1n1 00:13:30.535 Test: blockdev write read block ...passed 00:13:30.535 Test: blockdev write zeroes read block ...passed 00:13:30.535 Test: blockdev write zeroes read no split ...passed 00:13:30.535 Test: blockdev write zeroes read split ...passed 00:13:30.535 Test: blockdev write zeroes read split partial ...passed 00:13:30.535 Test: blockdev reset ...passed 00:13:30.535 Test: blockdev write read 8 blocks ...passed 00:13:30.535 Test: blockdev write read size > 128k ...passed 00:13:30.535 Test: blockdev write read invalid size ...passed 00:13:30.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.535 Test: blockdev write read max offset ...passed 00:13:30.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.535 Test: blockdev writev readv 8 blocks ...passed 00:13:30.535 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.535 Test: blockdev writev readv block ...passed 00:13:30.535 Test: blockdev writev readv size > 128k ...passed 00:13:30.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.535 Test: blockdev comparev and writev ...passed 00:13:30.535 Test: blockdev nvme passthru rw ...passed 00:13:30.535 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.535 Test: blockdev nvme admin passthru ...passed 00:13:30.535 Test: blockdev copy ...passed 00:13:30.535 Suite: bdevio tests on: nvme0n1 00:13:30.535 Test: blockdev write read block ...passed 00:13:30.535 Test: blockdev write zeroes read block ...passed 00:13:30.535 Test: blockdev write zeroes read no split ...passed 00:13:30.535 Test: blockdev write zeroes read split ...passed 00:13:30.535 Test: blockdev write zeroes read split partial ...passed 00:13:30.535 Test: blockdev reset ...passed 00:13:30.535 Test: blockdev write read 8 blocks ...passed 00:13:30.535 Test: blockdev write read size > 128k ...passed 00:13:30.535 Test: blockdev write read invalid size ...passed 00:13:30.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.535 Test: blockdev write read max offset ...passed 00:13:30.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.535 Test: blockdev writev readv 8 blocks ...passed 00:13:30.535 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.535 Test: blockdev writev readv block ...passed 00:13:30.535 Test: blockdev writev readv size > 128k ...passed 00:13:30.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.535 Test: blockdev comparev and writev ...passed 00:13:30.535 Test: blockdev nvme passthru rw ...passed 00:13:30.535 Test: blockdev nvme passthru vendor specific ...passed 00:13:30.535 Test: blockdev nvme admin passthru ...passed 00:13:30.535 Test: blockdev copy ...passed 
00:13:30.535 00:13:30.535 Run Summary: Type Total Ran Passed Failed Inactive 00:13:30.535 suites 6 6 n/a 0 0 00:13:30.535 tests 138 138 138 0 0 00:13:30.535 asserts 780 780 780 0 n/a 00:13:30.535 00:13:30.535 Elapsed time = 0.463 seconds 00:13:30.535 0 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81301 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 81301 ']' 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 81301 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81301 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.535 killing process with pid 81301 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81301' 00:13:30.535 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 81301 00:13:30.536 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 81301 00:13:30.798 13:26:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:30.798 00:13:30.798 real 0m1.378s 00:13:30.798 user 0m3.433s 00:13:30.798 sys 0m0.255s 00:13:30.798 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.798 ************************************ 00:13:30.798 END TEST bdev_bounds 00:13:30.798 ************************************ 00:13:30.798 13:26:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:30.798 13:26:26 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:30.798 13:26:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:30.798 13:26:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.798 13:26:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.798 ************************************ 00:13:30.798 START TEST bdev_nbd 00:13:30.798 ************************************ 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81357 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81357 /var/tmp/spdk-nbd.sock 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 81357 ']' 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:30.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:30.798 13:26:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:30.798 [2024-11-18 13:26:26.890311] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
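bdev_nbd exports each of the six xnvme bdevs through the kernel nbd driver: bdev_svc is started with its own RPC socket (/var/tmp/spdk-nbd.sock), nbd_start_disk attaches each bdev to a /dev/nbdN node, a single 4 KiB direct-I/O dd read verifies the mapping, and nbd_get_disks (further down) dumps the nbd_device/bdev_name pairs. A hedged sketch of one mapping and check; the scratch file path is illustrative, not the one the test uses:

# assumes bdev_svc is up on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
NBD_DEV=$($RPC nbd_start_disk nvme0n1)   # prints the attached node, e.g. /dev/nbd0
dd if="$NBD_DEV" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
$RPC nbd_get_disks                       # JSON list pairing each nbd_device with its bdev_name
$RPC nbd_stop_disk "$NBD_DEV"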
00:13:30.799 [2024-11-18 13:26:26.890402] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.060 [2024-11-18 13:26:27.046250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.060 [2024-11-18 13:26:27.065261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.636 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.636 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:31.636 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:31.636 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.636 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:31.636 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:31.636 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:31.898 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:31.899 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:31.899 13:26:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.899 
1+0 records in 00:13:31.899 1+0 records out 00:13:31.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119435 s, 3.4 MB/s 00:13:31.899 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.899 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:31.899 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.899 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:31.899 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:31.899 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:31.899 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:31.899 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.160 1+0 records in 00:13:32.160 1+0 records out 00:13:32.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00157815 s, 2.6 MB/s 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:32.160 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:32.422 13:26:28 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.422 1+0 records in 00:13:32.422 1+0 records out 00:13:32.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145192 s, 2.8 MB/s 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:32.422 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.685 1+0 records in 00:13:32.685 1+0 records out 00:13:32.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109203 s, 3.8 MB/s 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:32.685 13:26:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.947 1+0 records in 00:13:32.947 1+0 records out 00:13:32.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714007 s, 5.7 MB/s 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:32.947 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:33.206 13:26:29 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.206 1+0 records in 00:13:33.206 1+0 records out 00:13:33.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00455977 s, 898 kB/s 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.206 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:33.468 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd0", 00:13:33.468 "bdev_name": "nvme0n1" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd1", 00:13:33.468 "bdev_name": "nvme1n1" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd2", 00:13:33.468 "bdev_name": "nvme2n1" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd3", 00:13:33.468 "bdev_name": "nvme2n2" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd4", 00:13:33.468 "bdev_name": "nvme2n3" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd5", 00:13:33.468 "bdev_name": "nvme3n1" 00:13:33.468 } 00:13:33.468 ]' 00:13:33.468 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:33.468 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd0", 00:13:33.468 "bdev_name": "nvme0n1" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd1", 00:13:33.468 "bdev_name": "nvme1n1" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd2", 00:13:33.468 "bdev_name": "nvme2n1" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd3", 00:13:33.468 "bdev_name": "nvme2n2" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd4", 00:13:33.468 "bdev_name": "nvme2n3" 00:13:33.468 }, 00:13:33.468 { 00:13:33.468 "nbd_device": "/dev/nbd5", 00:13:33.468 "bdev_name": "nvme3n1" 00:13:33.468 } 00:13:33.468 ]' 00:13:33.468 13:26:29 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:33.468 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:33.468 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.468 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:33.468 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.468 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:33.468 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.469 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.731 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.993 13:26:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:34.263 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:34.263 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:34.264 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:34.264 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.264 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.264 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:34.264 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.264 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.264 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.264 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.526 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.789 13:26:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:35.051 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:35.312 /dev/nbd0 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.312 1+0 records in 00:13:35.312 1+0 records out 00:13:35.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100808 s, 4.1 MB/s 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:35.312 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:35.573 /dev/nbd1 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.573 1+0 records in 00:13:35.573 1+0 records out 00:13:35.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536445 s, 7.6 MB/s 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.573 13:26:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:35.573 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:35.834 /dev/nbd10 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.834 1+0 records in 00:13:35.834 1+0 records out 00:13:35.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137922 s, 3.0 MB/s 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:35.834 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:36.096 /dev/nbd11 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.096 13:26:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.096 1+0 records in 00:13:36.096 1+0 records out 00:13:36.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877822 s, 4.7 MB/s 00:13:36.096 13:26:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.096 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:36.096 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.096 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:36.096 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:36.096 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.096 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.096 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:36.096 /dev/nbd12 00:13:36.096 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.357 1+0 records in 00:13:36.357 1+0 records out 00:13:36.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977694 s, 4.2 MB/s 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:36.357 /dev/nbd13 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.357 1+0 records in 00:13:36.357 1+0 records out 00:13:36.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108907 s, 3.8 MB/s 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:36.357 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.358 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd0", 00:13:36.619 "bdev_name": "nvme0n1" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd1", 00:13:36.619 "bdev_name": "nvme1n1" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd10", 00:13:36.619 "bdev_name": "nvme2n1" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd11", 00:13:36.619 "bdev_name": "nvme2n2" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd12", 00:13:36.619 "bdev_name": "nvme2n3" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd13", 00:13:36.619 "bdev_name": "nvme3n1" 00:13:36.619 } 00:13:36.619 ]' 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd0", 00:13:36.619 "bdev_name": "nvme0n1" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd1", 00:13:36.619 "bdev_name": "nvme1n1" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd10", 00:13:36.619 "bdev_name": "nvme2n1" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd11", 00:13:36.619 "bdev_name": "nvme2n2" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd12", 00:13:36.619 "bdev_name": "nvme2n3" 00:13:36.619 }, 00:13:36.619 { 00:13:36.619 "nbd_device": "/dev/nbd13", 00:13:36.619 "bdev_name": "nvme3n1" 00:13:36.619 } 00:13:36.619 ]' 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:36.619 /dev/nbd1 00:13:36.619 /dev/nbd10 00:13:36.619 /dev/nbd11 00:13:36.619 /dev/nbd12 00:13:36.619 /dev/nbd13' 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:36.619 /dev/nbd1 00:13:36.619 /dev/nbd10 00:13:36.619 /dev/nbd11 00:13:36.619 /dev/nbd12 00:13:36.619 /dev/nbd13' 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:36.619 256+0 records in 00:13:36.619 256+0 records out 00:13:36.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00891205 s, 118 MB/s 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.619 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:36.880 256+0 records in 00:13:36.880 256+0 records out 00:13:36.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216425 s, 4.8 MB/s 00:13:36.880 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.880 13:26:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:37.141 256+0 records in 00:13:37.141 256+0 records out 00:13:37.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.274229 s, 
3.8 MB/s 00:13:37.141 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.141 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:37.403 256+0 records in 00:13:37.403 256+0 records out 00:13:37.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204701 s, 5.1 MB/s 00:13:37.403 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.403 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:37.664 256+0 records in 00:13:37.664 256+0 records out 00:13:37.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179423 s, 5.8 MB/s 00:13:37.664 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.664 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:37.664 256+0 records in 00:13:37.664 256+0 records out 00:13:37.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0998413 s, 10.5 MB/s 00:13:37.664 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.664 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:37.925 256+0 records in 00:13:37.925 256+0 records out 00:13:37.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175855 s, 6.0 MB/s 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 
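The trace above exercises a simple write-then-compare check: a scratch file of random data is written through each NBD device with O_DIRECT, and the first 1 MiB of every device is then compared back against that file. A minimal standalone sketch of the same pattern follows; the scratch path and device list are illustrative assumptions, not the repository's actual helper.

    #!/usr/bin/env bash
    # Sketch only: write random data through each NBD device, then compare it back.
    # The scratch path and device list are assumptions for illustration.
    tmp=/tmp/nbdrandtest
    devices=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for dev in "${devices[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write through the NBD device
    done
    for dev in "${devices[@]}"; do
        cmp -b -n 1M "$tmp" "$dev" || echo "mismatch on $dev"   # read back and compare 1 MiB
    done
    rm -f "$tmp"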
00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.925 13:26:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.186 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:38.447 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.447 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.447 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.447 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.447 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.447 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.448 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.448 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.448 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.448 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.707 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.966 13:26:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
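The teardown entries above repeat one pattern per device: nbd_stop_disk is sent over the RPC socket, then the helper polls /proc/partitions until the nbd name disappears, giving up after 20 iterations. A hedged sketch of that polling loop is shown below; the rpc.py path, socket path, and 0.1 s retry delay are assumptions and may differ from the repository's waitfornbd_exit helper.

    # Sketch only: stop an NBD export and wait for the kernel to drop it.
    # rpc.py location, socket path, and the sleep interval are assumptions.
    stop_and_wait() {
        local dev=$1 name i
        name=$(basename "$dev")
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || return 0     # device gone: success
            sleep 0.1
        done
        return 1                                                # still present after 20 tries
    }

    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        stop_and_wait "$dev"
    done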
00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.225 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:39.483 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:39.741 malloc_lvol_verify 00:13:39.741 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:39.741 b153f2e5-09cb-4dfd-8cef-0758e7fc42f2 00:13:39.999 13:26:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:39.999 952fc6ab-155a-40aa-b534-cb993be60fa2 00:13:39.999 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:40.266 /dev/nbd0 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:40.266 mke2fs 1.47.0 (5-Feb-2023) 00:13:40.266 Discarding device 
blocks: 0/4096 done 00:13:40.266 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:40.266 00:13:40.266 Allocating group tables: 0/1 done 00:13:40.266 Writing inode tables: 0/1 done 00:13:40.266 Creating journal (1024 blocks): done 00:13:40.266 Writing superblocks and filesystem accounting information: 0/1 done 00:13:40.266 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.266 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:40.535 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.535 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.535 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.535 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.535 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.535 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.535 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81357 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 81357 ']' 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 81357 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81357 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81357' 00:13:40.536 killing process with pid 81357 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 81357 00:13:40.536 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 81357 00:13:40.796 13:26:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:40.796 00:13:40.796 real 0m9.844s 00:13:40.796 user 0m13.699s 00:13:40.796 sys 0m3.468s 00:13:40.796 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.796 ************************************ 00:13:40.796 END TEST bdev_nbd 00:13:40.796 ************************************ 00:13:40.796 13:26:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set 
+x 00:13:40.796 13:26:36 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:40.796 13:26:36 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:40.796 13:26:36 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:40.796 13:26:36 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:40.796 13:26:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:40.796 13:26:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.796 13:26:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.796 ************************************ 00:13:40.796 START TEST bdev_fio 00:13:40.796 ************************************ 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:40.796 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:40.796 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:40.797 ************************************ 00:13:40.797 START TEST bdev_fio_rw_verify 00:13:40.797 ************************************ 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:40.797 13:26:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:41.058 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.058 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.058 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.058 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.058 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.058 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.058 fio-3.35 00:13:41.058 Starting 6 threads 00:13:53.288 00:13:53.288 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81747: Mon Nov 18 13:26:47 2024 00:13:53.288 read: IOPS=19.1k, BW=74.5MiB/s (78.1MB/s)(745MiB/10003msec) 00:13:53.288 slat (usec): min=2, max=2743, avg= 5.27, stdev=16.72 00:13:53.288 clat (usec): min=74, max=9959, avg=1014.52, stdev=825.59 00:13:53.288 lat (usec): min=78, max=9971, avg=1019.80, stdev=826.27 
00:13:53.288 clat percentiles (usec): 00:13:53.288 | 50.000th=[ 742], 99.000th=[ 3621], 99.900th=[ 5014], 99.990th=[ 8225], 00:13:53.288 | 99.999th=[ 9896] 00:13:53.288 write: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(760MiB/10003msec); 0 zone resets 00:13:53.288 slat (usec): min=4, max=4338, avg=34.08, stdev=127.81 00:13:53.288 clat (usec): min=75, max=8640, avg=1191.43, stdev=918.24 00:13:53.289 lat (usec): min=100, max=8677, avg=1225.51, stdev=934.02 00:13:53.289 clat percentiles (usec): 00:13:53.289 | 50.000th=[ 906], 99.000th=[ 4047], 99.900th=[ 5276], 99.990th=[ 7504], 00:13:53.289 | 99.999th=[ 8586] 00:13:53.289 bw ( KiB/s): min=48916, max=143445, per=100.00%, avg=79338.84, stdev=4584.00, samples=114 00:13:53.289 iops : min=12226, max=35860, avg=19833.89, stdev=1146.04, samples=114 00:13:53.289 lat (usec) : 100=0.05%, 250=9.05%, 500=23.32%, 750=14.70%, 1000=9.19% 00:13:53.289 lat (msec) : 2=28.43%, 4=14.43%, 10=0.82% 00:13:53.289 cpu : usr=43.64%, sys=31.16%, ctx=6304, majf=0, minf=17981 00:13:53.289 IO depths : 1=11.6%, 2=24.0%, 4=50.9%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:53.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.289 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.289 issued rwts: total=190819,194640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.289 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:53.289 00:13:53.289 Run status group 0 (all jobs): 00:13:53.289 READ: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=745MiB (782MB), run=10003-10003msec 00:13:53.289 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=760MiB (797MB), run=10003-10003msec 00:13:53.289 ----------------------------------------------------- 00:13:53.289 Suppressions used: 00:13:53.289 count bytes template 00:13:53.289 6 48 /usr/src/fio/parse.c 00:13:53.289 3686 353856 /usr/src/fio/iolog.c 00:13:53.289 1 8 libtcmalloc_minimal.so 00:13:53.289 1 904 libcrypto.so 00:13:53.289 ----------------------------------------------------- 00:13:53.289 00:13:53.289 00:13:53.289 real 0m11.068s 00:13:53.289 user 0m26.854s 00:13:53.289 sys 0m18.958s 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.289 ************************************ 00:13:53.289 END TEST bdev_fio_rw_verify 00:13:53.289 ************************************ 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a7a872ff-89aa-4108-9d6a-225210bdcd32"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a7a872ff-89aa-4108-9d6a-225210bdcd32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "26a7e4e1-48f0-45d4-ac29-582941ecdd5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "26a7e4e1-48f0-45d4-ac29-582941ecdd5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "aac4586a-e02c-414c-b9e2-ec16c4ae6187"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aac4586a-e02c-414c-b9e2-ec16c4ae6187",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "527324ea-b24b-452c-95d4-347409f965c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "527324ea-b24b-452c-95d4-347409f965c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ca474b3d-9952-40da-8128-ce7df007f755"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca474b3d-9952-40da-8128-ce7df007f755",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "01fa23b7-925d-43b6-a150-6889b85bee1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "01fa23b7-925d-43b6-a150-6889b85bee1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:53.289 /home/vagrant/spdk_repo/spdk 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:53.289 00:13:53.289 real 0m11.231s 00:13:53.289 user 
0m26.920s 00:13:53.289 sys 0m19.035s 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.289 ************************************ 00:13:53.289 END TEST bdev_fio 00:13:53.289 ************************************ 00:13:53.289 13:26:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:53.289 13:26:48 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:53.289 13:26:48 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:53.289 13:26:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:53.289 13:26:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.289 13:26:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.289 ************************************ 00:13:53.289 START TEST bdev_verify 00:13:53.289 ************************************ 00:13:53.289 13:26:48 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:53.289 [2024-11-18 13:26:48.097775] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:13:53.289 [2024-11-18 13:26:48.097882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81912 ] 00:13:53.289 [2024-11-18 13:26:48.257282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:53.289 [2024-11-18 13:26:48.277551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.289 [2024-11-18 13:26:48.277585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.289 Running I/O for 5 seconds... 
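For reference, the bdev_fio suite that finished above builds its fio job file by appending one [job_...] section per bdev before handing the file to the spdk_bdev ioengine. A minimal sketch of that generation loop, mirroring the blockdev.sh@340-342 trace lines; the config path, the append redirection, and the explicit bdev array are assumptions reconstructed from the trace, not the script itself:

  cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
  bdevs_name=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
  for b in "${bdevs_name[@]}"; do
      # one fio job per bdev, addressed by name through the spdk_bdev ioengine
      echo "[job_$b]"    >> "$cfg"
      echo "filename=$b" >> "$cfg"
  done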
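The bdev_verify stage that starts here drives the same bdevs through the bdevperf example app. The invocation below is copied from the run_test line above, with the commonly used flag meanings noted as comments; the -C flag is left unannotated rather than guessed at:

  # -q 128: 128 outstanding IOs per job    -o 4096: 4 KiB IO size
  # -w verify: read-back verification      -t 5: run for 5 seconds
  # -m 0x3: core mask for two reactors (matches "Total cores available: 2" above)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3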
00:13:54.935 20608.00 IOPS, 80.50 MiB/s [2024-11-18T13:26:52.008Z] 21632.00 IOPS, 84.50 MiB/s [2024-11-18T13:26:52.954Z] 22037.33 IOPS, 86.08 MiB/s [2024-11-18T13:26:53.898Z] 21824.00 IOPS, 85.25 MiB/s [2024-11-18T13:26:53.898Z] 21900.80 IOPS, 85.55 MiB/s 00:13:57.770 Latency(us) 00:13:57.770 [2024-11-18T13:26:53.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.770 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x0 length 0xa0000 00:13:57.770 nvme0n1 : 5.07 1791.86 7.00 0.00 0.00 71261.70 7057.72 84692.68 00:13:57.770 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0xa0000 length 0xa0000 00:13:57.770 nvme0n1 : 5.04 1651.91 6.45 0.00 0.00 77317.05 11090.71 85095.98 00:13:57.770 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x0 length 0xbd0bd 00:13:57.770 nvme1n1 : 5.05 2110.85 8.25 0.00 0.00 60315.61 5923.45 77433.30 00:13:57.770 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:57.770 nvme1n1 : 5.07 2122.55 8.29 0.00 0.00 59934.62 5923.45 68157.44 00:13:57.770 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x0 length 0x80000 00:13:57.770 nvme2n1 : 5.08 1839.64 7.19 0.00 0.00 68825.20 6704.84 70577.23 00:13:57.770 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x80000 length 0x80000 00:13:57.770 nvme2n1 : 5.04 1753.04 6.85 0.00 0.00 72522.15 5973.86 70577.23 00:13:57.770 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x0 length 0x80000 00:13:57.770 nvme2n2 : 5.07 1793.86 7.01 0.00 0.00 70387.29 6503.19 70980.53 00:13:57.770 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x80000 length 0x80000 00:13:57.770 nvme2n2 : 5.07 1690.72 6.60 0.00 0.00 74928.09 4259.84 72593.72 00:13:57.770 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x0 length 0x80000 00:13:57.770 nvme2n3 : 5.09 1811.61 7.08 0.00 0.00 69521.26 4285.05 75820.11 00:13:57.770 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x80000 length 0x80000 00:13:57.770 nvme2n3 : 5.08 1689.61 6.60 0.00 0.00 74773.11 5721.80 72593.72 00:13:57.770 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x0 length 0x20000 00:13:57.770 nvme3n1 : 5.09 1810.31 7.07 0.00 0.00 69486.40 5494.94 77030.01 00:13:57.770 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.770 Verification LBA range: start 0x20000 length 0x20000 00:13:57.770 nvme3n1 : 5.07 1666.09 6.51 0.00 0.00 75637.16 4839.58 80256.39 00:13:57.770 [2024-11-18T13:26:53.898Z] =================================================================================================================== 00:13:57.770 [2024-11-18T13:26:53.898Z] Total : 21732.06 84.89 0.00 0.00 69976.54 4259.84 85095.98 00:13:57.770 00:13:57.770 real 0m5.711s 00:13:57.770 user 0m9.597s 00:13:57.770 sys 0m1.071s 00:13:57.770 13:26:53 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.770 ************************************ 00:13:57.770 END TEST bdev_verify 00:13:57.770 ************************************ 00:13:57.770 13:26:53 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:57.770 13:26:53 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:57.770 13:26:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:57.770 13:26:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.770 13:26:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.770 ************************************ 00:13:57.770 START TEST bdev_verify_big_io 00:13:57.770 ************************************ 00:13:57.770 13:26:53 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:57.770 [2024-11-18 13:26:53.861505] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:13:57.770 [2024-11-18 13:26:53.861610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82004 ] 00:13:58.032 [2024-11-18 13:26:54.018838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:58.032 [2024-11-18 13:26:54.039103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.032 [2024-11-18 13:26:54.039220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.294 Running I/O for 5 seconds... 
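As a quick sanity check on the summary lines above, the reported bandwidth is simply IOPS times the IO size: the 4 KiB verify run's 21900.80 IOPS works out to about 85.5 MiB/s, matching the 85.55 MiB/s reported next to it in the trace. The same arithmetic in shell form:

  iops=21900.80; bs=4096          # values from the 4 KiB verify interval above
  awk -v i="$iops" -v b="$bs" 'BEGIN { printf "%.2f MiB/s\n", i * b / 1048576 }'
  # prints 85.55 MiB/s; the 64 KiB big-IO run below scales the same way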
00:14:03.888 1760.00 IOPS, 110.00 MiB/s [2024-11-18T13:27:00.588Z] 2096.00 IOPS, 131.00 MiB/s [2024-11-18T13:27:00.588Z] 2686.00 IOPS, 167.88 MiB/s 00:14:04.460 Latency(us) 00:14:04.460 [2024-11-18T13:27:00.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.460 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x0 length 0xa000 00:14:04.460 nvme0n1 : 5.92 101.05 6.32 0.00 0.00 1214033.07 50210.66 2271376.94 00:14:04.460 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0xa000 length 0xa000 00:14:04.460 nvme0n1 : 6.12 94.21 5.89 0.00 0.00 1305069.17 5318.50 1451874.46 00:14:04.460 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x0 length 0xbd0b 00:14:04.460 nvme1n1 : 5.98 112.33 7.02 0.00 0.00 1064579.47 8721.33 1129235.69 00:14:04.460 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:04.460 nvme1n1 : 6.13 122.75 7.67 0.00 0.00 952514.28 29844.09 1180857.90 00:14:04.460 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x0 length 0x8000 00:14:04.460 nvme2n1 : 5.92 83.74 5.23 0.00 0.00 1363246.37 206488.81 2400432.44 00:14:04.460 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x8000 length 0x8000 00:14:04.460 nvme2n1 : 6.13 93.99 5.87 0.00 0.00 1213301.72 54445.29 1626099.40 00:14:04.460 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x0 length 0x8000 00:14:04.460 nvme2n2 : 5.98 96.34 6.02 0.00 0.00 1164346.11 54041.99 1858399.31 00:14:04.460 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x8000 length 0x8000 00:14:04.460 nvme2n2 : 6.15 124.98 7.81 0.00 0.00 896904.53 11998.13 832408.02 00:14:04.460 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x0 length 0x8000 00:14:04.460 nvme2n3 : 6.13 99.21 6.20 0.00 0.00 1075633.48 53235.40 2929560.02 00:14:04.460 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x8000 length 0x8000 00:14:04.460 nvme2n3 : 6.15 101.52 6.34 0.00 0.00 1061498.76 11897.30 1729343.80 00:14:04.460 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x0 length 0x2000 00:14:04.460 nvme3n1 : 6.14 182.35 11.40 0.00 0.00 575795.67 1140.58 1084066.26 00:14:04.460 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.460 Verification LBA range: start 0x2000 length 0x2000 00:14:04.460 nvme3n1 : 6.15 135.76 8.48 0.00 0.00 764116.55 2923.91 1819682.66 00:14:04.460 [2024-11-18T13:27:00.588Z] =================================================================================================================== 00:14:04.460 [2024-11-18T13:27:00.588Z] Total : 1348.22 84.26 0.00 0.00 1005183.09 1140.58 2929560.02 00:14:04.721 00:14:04.721 real 0m6.808s 00:14:04.721 user 0m12.740s 00:14:04.721 sys 0m0.301s 00:14:04.721 13:27:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.721 
************************************ 00:14:04.721 END TEST bdev_verify_big_io 00:14:04.721 ************************************ 00:14:04.721 13:27:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.721 13:27:00 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:04.721 13:27:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:04.721 13:27:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.721 13:27:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.721 ************************************ 00:14:04.721 START TEST bdev_write_zeroes 00:14:04.721 ************************************ 00:14:04.721 13:27:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:04.721 [2024-11-18 13:27:00.732686] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:14:04.721 [2024-11-18 13:27:00.732801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82104 ] 00:14:04.982 [2024-11-18 13:27:00.890330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.982 [2024-11-18 13:27:00.909601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.243 Running I/O for 1 seconds... 
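Each of these stages is launched through the run_test helper from autotest_common.sh, which is what produces the START TEST/END TEST banners and the real/user/sys timing lines scattered through this trace. A simplified sketch of that pattern (not the actual helper, whose argument checks and xtrace handling are omitted here):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"      # the wrapped test command
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }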
00:14:06.186 97790.00 IOPS, 381.99 MiB/s 00:14:06.186 Latency(us) 00:14:06.186 [2024-11-18T13:27:02.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.186 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.186 nvme0n1 : 1.01 15926.67 62.21 0.00 0.00 8028.15 5091.64 23895.43 00:14:06.186 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.186 nvme1n1 : 1.02 17595.60 68.73 0.00 0.00 7261.20 4461.49 15325.34 00:14:06.186 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.186 nvme2n1 : 1.02 15908.34 62.14 0.00 0.00 7987.67 4133.81 20769.87 00:14:06.186 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.186 nvme2n2 : 1.01 15893.16 62.08 0.00 0.00 7988.81 4234.63 22383.06 00:14:06.186 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.186 nvme2n3 : 1.02 15875.09 62.01 0.00 0.00 7991.47 4411.08 23895.43 00:14:06.186 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.186 nvme3n1 : 1.02 15982.79 62.43 0.00 0.00 7932.48 3327.21 25407.80 00:14:06.186 [2024-11-18T13:27:02.314Z] =================================================================================================================== 00:14:06.186 [2024-11-18T13:27:02.314Z] Total : 97181.65 379.62 0.00 0.00 7854.27 3327.21 25407.80 00:14:06.186 00:14:06.186 real 0m1.614s 00:14:06.186 user 0m1.019s 00:14:06.186 sys 0m0.401s 00:14:06.186 13:27:02 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.186 ************************************ 00:14:06.186 END TEST bdev_write_zeroes 00:14:06.186 13:27:02 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:06.186 ************************************ 00:14:06.447 13:27:02 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:06.447 13:27:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:06.447 13:27:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.447 13:27:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:06.447 ************************************ 00:14:06.447 START TEST bdev_json_nonenclosed 00:14:06.447 ************************************ 00:14:06.448 13:27:02 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:06.448 [2024-11-18 13:27:02.411007] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:14:06.448 [2024-11-18 13:27:02.411116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82135 ] 00:14:06.448 [2024-11-18 13:27:02.568440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.709 [2024-11-18 13:27:02.587548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.709 [2024-11-18 13:27:02.587623] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:06.709 [2024-11-18 13:27:02.587638] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:06.709 [2024-11-18 13:27:02.587648] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:06.709 00:14:06.709 real 0m0.300s 00:14:06.709 user 0m0.123s 00:14:06.709 sys 0m0.074s 00:14:06.709 ************************************ 00:14:06.709 END TEST bdev_json_nonenclosed 00:14:06.709 ************************************ 00:14:06.709 13:27:02 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.709 13:27:02 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:06.709 13:27:02 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:06.709 13:27:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:06.709 13:27:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.709 13:27:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:06.709 ************************************ 00:14:06.709 START TEST bdev_json_nonarray 00:14:06.709 ************************************ 00:14:06.709 13:27:02 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:06.709 [2024-11-18 13:27:02.771565] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:14:06.709 [2024-11-18 13:27:02.771681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82166 ] 00:14:06.970 [2024-11-18 13:27:02.929324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.970 [2024-11-18 13:27:02.947992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.970 [2024-11-18 13:27:02.948085] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
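The bdev_json_nonenclosed and bdev_json_nonarray cases here are negative tests: bdevperf is pointed at deliberately malformed --json files and is expected to exit through the json_config/rpc error path shown in the trace rather than start up. The exact fixture contents are not in this log, but inputs of roughly this shape would exercise the two error messages seen here:

  # valid JSON whose top level is an array rather than an object
  # -> "Invalid JSON configuration: not enclosed in {}."
  cat > nonenclosed.json <<'EOF'
  [ { "subsystem": "bdev", "config": [] } ]
  EOF
  # "subsystems" present but not an array
  # -> "Invalid JSON configuration: 'subsystems' should be an array."
  cat > nonarray.json <<'EOF'
  { "subsystems": {} }
  EOF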
00:14:06.970 [2024-11-18 13:27:02.948100] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:06.970 [2024-11-18 13:27:02.948111] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:06.970 00:14:06.970 real 0m0.297s 00:14:06.970 user 0m0.117s 00:14:06.970 sys 0m0.077s 00:14:06.970 13:27:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.970 ************************************ 00:14:06.970 END TEST bdev_json_nonarray 00:14:06.970 ************************************ 00:14:06.970 13:27:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:06.970 13:27:03 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:07.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:11.746 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.746 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.746 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.746 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.746 00:14:11.746 real 0m50.033s 00:14:11.746 user 1m15.782s 00:14:11.746 sys 0m36.448s 00:14:11.746 13:27:07 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.746 ************************************ 00:14:11.746 END TEST blockdev_xnvme 00:14:11.746 ************************************ 00:14:11.746 13:27:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.746 13:27:07 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:11.746 13:27:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:11.746 13:27:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.746 13:27:07 -- common/autotest_common.sh@10 -- # set +x 00:14:11.746 ************************************ 00:14:11.746 START TEST ublk 00:14:11.746 ************************************ 00:14:11.746 13:27:07 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:11.746 * Looking for test storage... 
00:14:11.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:11.746 13:27:07 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:11.746 13:27:07 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:14:11.746 13:27:07 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:12.007 13:27:07 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:12.007 13:27:07 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.007 13:27:07 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.007 13:27:07 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.007 13:27:07 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.007 13:27:07 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.007 13:27:07 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.007 13:27:07 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.007 13:27:07 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.007 13:27:07 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.007 13:27:07 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.007 13:27:07 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.007 13:27:07 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:12.007 13:27:07 ublk -- scripts/common.sh@345 -- # : 1 00:14:12.007 13:27:07 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.007 13:27:07 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:12.007 13:27:07 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:12.007 13:27:07 ublk -- scripts/common.sh@353 -- # local d=1 00:14:12.007 13:27:07 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.007 13:27:07 ublk -- scripts/common.sh@355 -- # echo 1 00:14:12.007 13:27:07 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.007 13:27:07 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:12.007 13:27:07 ublk -- scripts/common.sh@353 -- # local d=2 00:14:12.007 13:27:07 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.007 13:27:07 ublk -- scripts/common.sh@355 -- # echo 2 00:14:12.007 13:27:07 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.007 13:27:07 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.007 13:27:07 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.007 13:27:07 ublk -- scripts/common.sh@368 -- # return 0 00:14:12.007 13:27:07 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.007 13:27:07 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:12.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.007 --rc genhtml_branch_coverage=1 00:14:12.007 --rc genhtml_function_coverage=1 00:14:12.007 --rc genhtml_legend=1 00:14:12.007 --rc geninfo_all_blocks=1 00:14:12.007 --rc geninfo_unexecuted_blocks=1 00:14:12.007 00:14:12.007 ' 00:14:12.007 13:27:07 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:12.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.007 --rc genhtml_branch_coverage=1 00:14:12.007 --rc genhtml_function_coverage=1 00:14:12.007 --rc genhtml_legend=1 00:14:12.007 --rc geninfo_all_blocks=1 00:14:12.007 --rc geninfo_unexecuted_blocks=1 00:14:12.007 00:14:12.007 ' 00:14:12.007 13:27:07 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:12.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.007 --rc genhtml_branch_coverage=1 00:14:12.007 --rc 
genhtml_function_coverage=1 00:14:12.007 --rc genhtml_legend=1 00:14:12.007 --rc geninfo_all_blocks=1 00:14:12.007 --rc geninfo_unexecuted_blocks=1 00:14:12.007 00:14:12.007 ' 00:14:12.007 13:27:07 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:12.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.007 --rc genhtml_branch_coverage=1 00:14:12.007 --rc genhtml_function_coverage=1 00:14:12.007 --rc genhtml_legend=1 00:14:12.007 --rc geninfo_all_blocks=1 00:14:12.007 --rc geninfo_unexecuted_blocks=1 00:14:12.007 00:14:12.007 ' 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:12.007 13:27:07 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:12.007 13:27:07 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:12.007 13:27:07 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:12.007 13:27:07 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:12.007 13:27:07 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:12.007 13:27:07 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:12.007 13:27:07 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:12.007 13:27:07 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:12.007 13:27:07 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:12.007 13:27:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:12.007 13:27:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.007 13:27:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 ************************************ 00:14:12.007 START TEST test_save_ublk_config 00:14:12.007 ************************************ 00:14:12.007 13:27:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:14:12.007 13:27:07 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:12.008 13:27:07 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82451 00:14:12.008 13:27:07 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:12.008 13:27:07 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:12.008 13:27:07 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82451 00:14:12.008 13:27:07 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82451 ']' 00:14:12.008 13:27:07 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
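test_save_ublk_config, starting here, brings up spdk_tgt with -L ublk, creates a ublk target and a malloc-backed ublk disk over RPC, captures the live configuration with save_config, and later replays that JSON into a fresh target through -c /dev/fd/63. A rough sketch of the RPC sequence; the method names and parameter values mirror the saved config printed below, but the rpc.py option spellings, sizes, and output path are assumptions rather than values taken from this trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" ublk_create_target                        # cpumask "1" in the saved config
  "$rpc" bdev_malloc_create -b malloc0 32 4096     # 8192 blocks x 4096 B = 32 MiB
  "$rpc" ublk_start_disk malloc0 0                 # ublk_id 0, 1 queue, queue depth 128
  "$rpc" save_config > ublk_config.json            # replayed later via spdk_tgt -c /dev/fd/63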
00:14:12.008 13:27:07 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.008 13:27:07 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.008 13:27:07 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.008 13:27:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:12.008 [2024-11-18 13:27:07.994178] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:14:12.008 [2024-11-18 13:27:07.994294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82451 ] 00:14:12.269 [2024-11-18 13:27:08.149043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.269 [2024-11-18 13:27:08.168013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:12.841 [2024-11-18 13:27:08.833184] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:12.841 [2024-11-18 13:27:08.833802] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:12.841 malloc0 00:14:12.841 [2024-11-18 13:27:08.857298] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:12.841 [2024-11-18 13:27:08.857371] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:12.841 [2024-11-18 13:27:08.857378] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:12.841 [2024-11-18 13:27:08.857391] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:12.841 [2024-11-18 13:27:08.866251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:12.841 [2024-11-18 13:27:08.866303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:12.841 [2024-11-18 13:27:08.873193] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:12.841 [2024-11-18 13:27:08.873289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:12.841 [2024-11-18 13:27:08.890186] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:12.841 0 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.841 13:27:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:13.103 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:13.103 13:27:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:13.103 "subsystems": [ 00:14:13.103 { 00:14:13.103 "subsystem": "fsdev", 00:14:13.103 "config": [ 00:14:13.103 { 00:14:13.103 "method": "fsdev_set_opts", 00:14:13.103 "params": { 00:14:13.103 "fsdev_io_pool_size": 65535, 00:14:13.103 "fsdev_io_cache_size": 256 00:14:13.103 } 00:14:13.103 } 00:14:13.103 ] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "keyring", 00:14:13.103 "config": [] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "iobuf", 00:14:13.103 "config": [ 00:14:13.103 { 00:14:13.103 "method": "iobuf_set_options", 00:14:13.103 "params": { 00:14:13.103 "small_pool_count": 8192, 00:14:13.103 "large_pool_count": 1024, 00:14:13.103 "small_bufsize": 8192, 00:14:13.103 "large_bufsize": 135168, 00:14:13.103 "enable_numa": false 00:14:13.103 } 00:14:13.103 } 00:14:13.103 ] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "sock", 00:14:13.103 "config": [ 00:14:13.103 { 00:14:13.103 "method": "sock_set_default_impl", 00:14:13.103 "params": { 00:14:13.103 "impl_name": "posix" 00:14:13.103 } 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "method": "sock_impl_set_options", 00:14:13.103 "params": { 00:14:13.103 "impl_name": "ssl", 00:14:13.103 "recv_buf_size": 4096, 00:14:13.103 "send_buf_size": 4096, 00:14:13.103 "enable_recv_pipe": true, 00:14:13.103 "enable_quickack": false, 00:14:13.103 "enable_placement_id": 0, 00:14:13.103 "enable_zerocopy_send_server": true, 00:14:13.103 "enable_zerocopy_send_client": false, 00:14:13.103 "zerocopy_threshold": 0, 00:14:13.103 "tls_version": 0, 00:14:13.103 "enable_ktls": false 00:14:13.103 } 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "method": "sock_impl_set_options", 00:14:13.103 "params": { 00:14:13.103 "impl_name": "posix", 00:14:13.103 "recv_buf_size": 2097152, 00:14:13.103 "send_buf_size": 2097152, 00:14:13.103 "enable_recv_pipe": true, 00:14:13.103 "enable_quickack": false, 00:14:13.103 "enable_placement_id": 0, 00:14:13.103 "enable_zerocopy_send_server": true, 00:14:13.103 "enable_zerocopy_send_client": false, 00:14:13.103 "zerocopy_threshold": 0, 00:14:13.103 "tls_version": 0, 00:14:13.103 "enable_ktls": false 00:14:13.103 } 00:14:13.103 } 00:14:13.103 ] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "vmd", 00:14:13.103 "config": [] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "accel", 00:14:13.103 "config": [ 00:14:13.103 { 00:14:13.103 "method": "accel_set_options", 00:14:13.103 "params": { 00:14:13.103 "small_cache_size": 128, 00:14:13.103 "large_cache_size": 16, 00:14:13.103 "task_count": 2048, 00:14:13.103 "sequence_count": 2048, 00:14:13.103 "buf_count": 2048 00:14:13.103 } 00:14:13.103 } 00:14:13.103 ] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "bdev", 00:14:13.103 "config": [ 00:14:13.103 { 00:14:13.103 "method": "bdev_set_options", 00:14:13.103 "params": { 00:14:13.103 "bdev_io_pool_size": 65535, 00:14:13.103 "bdev_io_cache_size": 256, 00:14:13.103 "bdev_auto_examine": true, 00:14:13.103 "iobuf_small_cache_size": 128, 00:14:13.103 "iobuf_large_cache_size": 16 00:14:13.103 } 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "method": "bdev_raid_set_options", 00:14:13.103 "params": { 00:14:13.103 "process_window_size_kb": 1024, 00:14:13.103 "process_max_bandwidth_mb_sec": 0 00:14:13.103 } 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "method": "bdev_iscsi_set_options", 00:14:13.103 "params": { 00:14:13.103 "timeout_sec": 30 00:14:13.103 } 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 
"method": "bdev_nvme_set_options", 00:14:13.103 "params": { 00:14:13.103 "action_on_timeout": "none", 00:14:13.103 "timeout_us": 0, 00:14:13.103 "timeout_admin_us": 0, 00:14:13.103 "keep_alive_timeout_ms": 10000, 00:14:13.103 "arbitration_burst": 0, 00:14:13.103 "low_priority_weight": 0, 00:14:13.103 "medium_priority_weight": 0, 00:14:13.103 "high_priority_weight": 0, 00:14:13.103 "nvme_adminq_poll_period_us": 10000, 00:14:13.103 "nvme_ioq_poll_period_us": 0, 00:14:13.103 "io_queue_requests": 0, 00:14:13.103 "delay_cmd_submit": true, 00:14:13.103 "transport_retry_count": 4, 00:14:13.103 "bdev_retry_count": 3, 00:14:13.103 "transport_ack_timeout": 0, 00:14:13.103 "ctrlr_loss_timeout_sec": 0, 00:14:13.103 "reconnect_delay_sec": 0, 00:14:13.103 "fast_io_fail_timeout_sec": 0, 00:14:13.103 "disable_auto_failback": false, 00:14:13.103 "generate_uuids": false, 00:14:13.103 "transport_tos": 0, 00:14:13.103 "nvme_error_stat": false, 00:14:13.103 "rdma_srq_size": 0, 00:14:13.103 "io_path_stat": false, 00:14:13.103 "allow_accel_sequence": false, 00:14:13.103 "rdma_max_cq_size": 0, 00:14:13.103 "rdma_cm_event_timeout_ms": 0, 00:14:13.103 "dhchap_digests": [ 00:14:13.103 "sha256", 00:14:13.103 "sha384", 00:14:13.103 "sha512" 00:14:13.103 ], 00:14:13.103 "dhchap_dhgroups": [ 00:14:13.103 "null", 00:14:13.103 "ffdhe2048", 00:14:13.103 "ffdhe3072", 00:14:13.103 "ffdhe4096", 00:14:13.103 "ffdhe6144", 00:14:13.103 "ffdhe8192" 00:14:13.103 ] 00:14:13.103 } 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "method": "bdev_nvme_set_hotplug", 00:14:13.103 "params": { 00:14:13.103 "period_us": 100000, 00:14:13.103 "enable": false 00:14:13.103 } 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "method": "bdev_malloc_create", 00:14:13.103 "params": { 00:14:13.103 "name": "malloc0", 00:14:13.103 "num_blocks": 8192, 00:14:13.103 "block_size": 4096, 00:14:13.103 "physical_block_size": 4096, 00:14:13.103 "uuid": "65c05632-150e-4120-93b6-d43329403f55", 00:14:13.103 "optimal_io_boundary": 0, 00:14:13.103 "md_size": 0, 00:14:13.103 "dif_type": 0, 00:14:13.103 "dif_is_head_of_md": false, 00:14:13.103 "dif_pi_format": 0 00:14:13.103 } 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "method": "bdev_wait_for_examine" 00:14:13.103 } 00:14:13.103 ] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "scsi", 00:14:13.103 "config": null 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "scheduler", 00:14:13.103 "config": [ 00:14:13.103 { 00:14:13.103 "method": "framework_set_scheduler", 00:14:13.103 "params": { 00:14:13.103 "name": "static" 00:14:13.103 } 00:14:13.103 } 00:14:13.103 ] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "vhost_scsi", 00:14:13.103 "config": [] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "vhost_blk", 00:14:13.103 "config": [] 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "subsystem": "ublk", 00:14:13.103 "config": [ 00:14:13.103 { 00:14:13.103 "method": "ublk_create_target", 00:14:13.103 "params": { 00:14:13.103 "cpumask": "1" 00:14:13.103 } 00:14:13.103 }, 00:14:13.103 { 00:14:13.103 "method": "ublk_start_disk", 00:14:13.103 "params": { 00:14:13.103 "bdev_name": "malloc0", 00:14:13.103 "ublk_id": 0, 00:14:13.104 "num_queues": 1, 00:14:13.104 "queue_depth": 128 00:14:13.104 } 00:14:13.104 } 00:14:13.104 ] 00:14:13.104 }, 00:14:13.104 { 00:14:13.104 "subsystem": "nbd", 00:14:13.104 "config": [] 00:14:13.104 }, 00:14:13.104 { 00:14:13.104 "subsystem": "nvmf", 00:14:13.104 "config": [ 00:14:13.104 { 00:14:13.104 "method": "nvmf_set_config", 00:14:13.104 "params": { 00:14:13.104 
"discovery_filter": "match_any", 00:14:13.104 "admin_cmd_passthru": { 00:14:13.104 "identify_ctrlr": false 00:14:13.104 }, 00:14:13.104 "dhchap_digests": [ 00:14:13.104 "sha256", 00:14:13.104 "sha384", 00:14:13.104 "sha512" 00:14:13.104 ], 00:14:13.104 "dhchap_dhgroups": [ 00:14:13.104 "null", 00:14:13.104 "ffdhe2048", 00:14:13.104 "ffdhe3072", 00:14:13.104 "ffdhe4096", 00:14:13.104 "ffdhe6144", 00:14:13.104 "ffdhe8192" 00:14:13.104 ] 00:14:13.104 } 00:14:13.104 }, 00:14:13.104 { 00:14:13.104 "method": "nvmf_set_max_subsystems", 00:14:13.104 "params": { 00:14:13.104 "max_subsystems": 1024 00:14:13.104 } 00:14:13.104 }, 00:14:13.104 { 00:14:13.104 "method": "nvmf_set_crdt", 00:14:13.104 "params": { 00:14:13.104 "crdt1": 0, 00:14:13.104 "crdt2": 0, 00:14:13.104 "crdt3": 0 00:14:13.104 } 00:14:13.104 } 00:14:13.104 ] 00:14:13.104 }, 00:14:13.104 { 00:14:13.104 "subsystem": "iscsi", 00:14:13.104 "config": [ 00:14:13.104 { 00:14:13.104 "method": "iscsi_set_options", 00:14:13.104 "params": { 00:14:13.104 "node_base": "iqn.2016-06.io.spdk", 00:14:13.104 "max_sessions": 128, 00:14:13.104 "max_connections_per_session": 2, 00:14:13.104 "max_queue_depth": 64, 00:14:13.104 "default_time2wait": 2, 00:14:13.104 "default_time2retain": 20, 00:14:13.104 "first_burst_length": 8192, 00:14:13.104 "immediate_data": true, 00:14:13.104 "allow_duplicated_isid": false, 00:14:13.104 "error_recovery_level": 0, 00:14:13.104 "nop_timeout": 60, 00:14:13.104 "nop_in_interval": 30, 00:14:13.104 "disable_chap": false, 00:14:13.104 "require_chap": false, 00:14:13.104 "mutual_chap": false, 00:14:13.104 "chap_group": 0, 00:14:13.104 "max_large_datain_per_connection": 64, 00:14:13.104 "max_r2t_per_connection": 4, 00:14:13.104 "pdu_pool_size": 36864, 00:14:13.104 "immediate_data_pool_size": 16384, 00:14:13.104 "data_out_pool_size": 2048 00:14:13.104 } 00:14:13.104 } 00:14:13.104 ] 00:14:13.104 } 00:14:13.104 ] 00:14:13.104 }' 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82451 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82451 ']' 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82451 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82451 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.104 killing process with pid 82451 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82451' 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82451 00:14:13.104 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82451 00:14:13.364 [2024-11-18 13:27:09.384283] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:13.364 [2024-11-18 13:27:09.417198] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:13.364 [2024-11-18 13:27:09.417329] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:13.364 [2024-11-18 13:27:09.426198] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:13.364 [2024-11-18 13:27:09.426260] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:13.364 [2024-11-18 13:27:09.426273] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:13.364 [2024-11-18 13:27:09.426298] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:13.364 [2024-11-18 13:27:09.426435] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82488 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82488 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82488 ']' 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:13.626 13:27:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:13.626 "subsystems": [ 00:14:13.626 { 00:14:13.626 "subsystem": "fsdev", 00:14:13.626 "config": [ 00:14:13.626 { 00:14:13.626 "method": "fsdev_set_opts", 00:14:13.626 "params": { 00:14:13.626 "fsdev_io_pool_size": 65535, 00:14:13.626 "fsdev_io_cache_size": 256 00:14:13.626 } 00:14:13.626 } 00:14:13.626 ] 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "subsystem": "keyring", 00:14:13.626 "config": [] 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "subsystem": "iobuf", 00:14:13.626 "config": [ 00:14:13.626 { 00:14:13.626 "method": "iobuf_set_options", 00:14:13.626 "params": { 00:14:13.626 "small_pool_count": 8192, 00:14:13.626 "large_pool_count": 1024, 00:14:13.626 "small_bufsize": 8192, 00:14:13.626 "large_bufsize": 135168, 00:14:13.626 "enable_numa": false 00:14:13.626 } 00:14:13.626 } 00:14:13.626 ] 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "subsystem": "sock", 00:14:13.626 "config": [ 00:14:13.626 { 00:14:13.626 "method": "sock_set_default_impl", 00:14:13.626 "params": { 00:14:13.626 "impl_name": "posix" 00:14:13.626 } 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "method": "sock_impl_set_options", 00:14:13.626 "params": { 00:14:13.626 "impl_name": "ssl", 00:14:13.626 "recv_buf_size": 4096, 00:14:13.626 "send_buf_size": 4096, 00:14:13.626 "enable_recv_pipe": true, 00:14:13.626 "enable_quickack": false, 00:14:13.626 "enable_placement_id": 0, 00:14:13.626 "enable_zerocopy_send_server": true, 00:14:13.626 "enable_zerocopy_send_client": false, 00:14:13.626 "zerocopy_threshold": 0, 00:14:13.626 "tls_version": 0, 00:14:13.626 "enable_ktls": false 00:14:13.626 } 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "method": "sock_impl_set_options", 00:14:13.626 "params": { 00:14:13.626 "impl_name": "posix", 00:14:13.626 "recv_buf_size": 2097152, 00:14:13.626 "send_buf_size": 2097152, 00:14:13.626 "enable_recv_pipe": true, 00:14:13.626 "enable_quickack": 
false, 00:14:13.626 "enable_placement_id": 0, 00:14:13.626 "enable_zerocopy_send_server": true, 00:14:13.626 "enable_zerocopy_send_client": false, 00:14:13.626 "zerocopy_threshold": 0, 00:14:13.626 "tls_version": 0, 00:14:13.626 "enable_ktls": false 00:14:13.626 } 00:14:13.626 } 00:14:13.626 ] 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "subsystem": "vmd", 00:14:13.626 "config": [] 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "subsystem": "accel", 00:14:13.626 "config": [ 00:14:13.626 { 00:14:13.626 "method": "accel_set_options", 00:14:13.626 "params": { 00:14:13.626 "small_cache_size": 128, 00:14:13.626 "large_cache_size": 16, 00:14:13.626 "task_count": 2048, 00:14:13.626 "sequence_count": 2048, 00:14:13.626 "buf_count": 2048 00:14:13.626 } 00:14:13.626 } 00:14:13.626 ] 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "subsystem": "bdev", 00:14:13.626 "config": [ 00:14:13.626 { 00:14:13.626 "method": "bdev_set_options", 00:14:13.626 "params": { 00:14:13.626 "bdev_io_pool_size": 65535, 00:14:13.626 "bdev_io_cache_size": 256, 00:14:13.626 "bdev_auto_examine": true, 00:14:13.626 "iobuf_small_cache_size": 128, 00:14:13.626 "iobuf_large_cache_size": 16 00:14:13.626 } 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "method": "bdev_raid_set_options", 00:14:13.626 "params": { 00:14:13.626 "process_window_size_kb": 1024, 00:14:13.626 "process_max_bandwidth_mb_sec": 0 00:14:13.626 } 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "method": "bdev_iscsi_set_options", 00:14:13.626 "params": { 00:14:13.626 "timeout_sec": 30 00:14:13.626 } 00:14:13.626 }, 00:14:13.626 { 00:14:13.626 "method": "bdev_nvme_set_options", 00:14:13.626 "params": { 00:14:13.626 "action_on_timeout": "none", 00:14:13.626 "timeout_us": 0, 00:14:13.626 "timeout_admin_us": 0, 00:14:13.626 "keep_alive_timeout_ms": 10000, 00:14:13.626 "arbitration_burst": 0, 00:14:13.626 "low_priority_weight": 0, 00:14:13.626 "medium_priority_weight": 0, 00:14:13.626 "high_priority_weight": 0, 00:14:13.626 "nvme_adminq_poll_period_us": 10000, 00:14:13.626 "nvme_ioq_poll_period_us": 0, 00:14:13.626 "io_queue_requests": 0, 00:14:13.626 "delay_cmd_submit": true, 00:14:13.626 "transport_retry_count": 4, 00:14:13.626 "bdev_retry_count": 3, 00:14:13.626 "transport_ack_timeout": 0, 00:14:13.627 "ctrlr_loss_timeout_sec": 0, 00:14:13.627 "reconnect_delay_sec": 0, 00:14:13.627 "fast_io_fail_timeout_sec": 0, 00:14:13.627 "disable_auto_failback": false, 00:14:13.627 "generate_uuids": false, 00:14:13.627 "transport_tos": 0, 00:14:13.627 "nvme_error_stat": false, 00:14:13.627 "rdma_srq_size": 0, 00:14:13.627 "io_path_stat": false, 00:14:13.627 "allow_accel_sequence": false, 00:14:13.627 "rdma_max_cq_size": 0, 00:14:13.627 "rdma_cm_event_timeout_ms": 0, 00:14:13.627 "dhchap_digests": [ 00:14:13.627 "sha256", 00:14:13.627 "sha384", 00:14:13.627 "sha512" 00:14:13.627 ], 00:14:13.627 "dhchap_dhgroups": [ 00:14:13.627 "null", 00:14:13.627 "ffdhe2048", 00:14:13.627 "ffdhe3072", 00:14:13.627 "ffdhe4096", 00:14:13.627 "ffdhe6144", 00:14:13.627 "ffdhe8192" 00:14:13.627 ] 00:14:13.627 } 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "method": "bdev_nvme_set_hotplug", 00:14:13.627 "params": { 00:14:13.627 "period_us": 100000, 00:14:13.627 "enable": false 00:14:13.627 } 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "method": "bdev_malloc_create", 00:14:13.627 "params": { 00:14:13.627 "name": "malloc0", 00:14:13.627 "num_blocks": 8192, 00:14:13.627 "block_size": 4096, 00:14:13.627 "physical_block_size": 4096, 00:14:13.627 "uuid": "65c05632-150e-4120-93b6-d43329403f55", 00:14:13.627 
"optimal_io_boundary": 0, 00:14:13.627 "md_size": 0, 00:14:13.627 "dif_type": 0, 00:14:13.627 "dif_is_head_of_md": false, 00:14:13.627 "dif_pi_format": 0 00:14:13.627 } 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "method": "bdev_wait_for_examine" 00:14:13.627 } 00:14:13.627 ] 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "subsystem": "scsi", 00:14:13.627 "config": null 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "subsystem": "scheduler", 00:14:13.627 "config": [ 00:14:13.627 { 00:14:13.627 "method": "framework_set_scheduler", 00:14:13.627 "params": { 00:14:13.627 "name": "static" 00:14:13.627 } 00:14:13.627 } 00:14:13.627 ] 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "subsystem": "vhost_scsi", 00:14:13.627 "config": [] 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "subsystem": "vhost_blk", 00:14:13.627 "config": [] 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "subsystem": "ublk", 00:14:13.627 "config": [ 00:14:13.627 { 00:14:13.627 "method": "ublk_create_target", 00:14:13.627 "params": { 00:14:13.627 "cpumask": "1" 00:14:13.627 } 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "method": "ublk_start_disk", 00:14:13.627 "params": { 00:14:13.627 "bdev_name": "malloc0", 00:14:13.627 "ublk_id": 0, 00:14:13.627 "num_queues": 1, 00:14:13.627 "queue_depth": 128 00:14:13.627 } 00:14:13.627 } 00:14:13.627 ] 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "subsystem": "nbd", 00:14:13.627 "config": [] 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "subsystem": "nvmf", 00:14:13.627 "config": [ 00:14:13.627 { 00:14:13.627 "method": "nvmf_set_config", 00:14:13.627 "params": { 00:14:13.627 "discovery_filter": "match_any", 00:14:13.627 "admin_cmd_passthru": { 00:14:13.627 "identify_ctrlr": false 00:14:13.627 }, 00:14:13.627 "dhchap_digests": [ 00:14:13.627 "sha256", 00:14:13.627 "sha384", 00:14:13.627 "sha512" 00:14:13.627 ], 00:14:13.627 "dhchap_dhgroups": [ 00:14:13.627 "null", 00:14:13.627 "ffdhe2048", 00:14:13.627 "ffdhe3072", 00:14:13.627 "ffdhe4096", 00:14:13.627 "ffdhe6144", 00:14:13.627 "ffdhe8192" 00:14:13.627 ] 00:14:13.627 } 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "method": "nvmf_set_max_subsystems", 00:14:13.627 "params": { 00:14:13.627 "max_subsystems": 1024 00:14:13.627 } 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "method": "nvmf_set_crdt", 00:14:13.627 "params": { 00:14:13.627 "crdt1": 0, 00:14:13.627 "crdt2": 0, 00:14:13.627 "crdt3": 0 00:14:13.627 } 00:14:13.627 } 00:14:13.627 ] 00:14:13.627 }, 00:14:13.627 { 00:14:13.627 "subsystem": "iscsi", 00:14:13.627 "config": [ 00:14:13.627 { 00:14:13.627 "method": "iscsi_set_options", 00:14:13.627 "params": { 00:14:13.627 "node_base": "iqn.2016-06.io.spdk", 00:14:13.627 "max_sessions": 128, 00:14:13.627 "max_connections_per_session": 2, 00:14:13.627 "max_queue_depth": 64, 00:14:13.627 "default_time2wait": 2, 00:14:13.627 "default_time2retain": 20, 00:14:13.627 "first_burst_length": 8192, 00:14:13.627 "immediate_data": true, 00:14:13.627 "allow_duplicated_isid": false, 00:14:13.627 "error_recovery_level": 0, 00:14:13.627 "nop_timeout": 60, 00:14:13.627 "nop_in_interval": 30, 00:14:13.627 "disable_chap": false, 00:14:13.627 "require_chap": false, 00:14:13.627 "mutual_chap": false, 00:14:13.627 "chap_group": 0, 00:14:13.627 "max_large_datain_per_connection": 64, 00:14:13.627 "max_r2t_per_connection": 4, 00:14:13.627 "pdu_pool_size": 36864, 00:14:13.627 "immediate_data_pool_size": 16384, 00:14:13.627 "data_out_pool_size": 2048 00:14:13.627 } 00:14:13.627 } 00:14:13.627 ] 00:14:13.627 } 00:14:13.627 ] 00:14:13.627 }' 00:14:13.889 [2024-11-18 13:27:09.779893] 
Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:14:13.889 [2024-11-18 13:27:09.780009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82488 ] 00:14:13.889 [2024-11-18 13:27:09.928657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.889 [2024-11-18 13:27:09.947279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.150 [2024-11-18 13:27:10.268185] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:14.150 [2024-11-18 13:27:10.268456] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:14.412 [2024-11-18 13:27:10.276303] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:14.412 [2024-11-18 13:27:10.276377] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:14.412 [2024-11-18 13:27:10.276384] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:14.412 [2024-11-18 13:27:10.276393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.412 [2024-11-18 13:27:10.285252] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.412 [2024-11-18 13:27:10.285273] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.412 [2024-11-18 13:27:10.292191] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.412 [2024-11-18 13:27:10.292286] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:14.412 [2024-11-18 13:27:10.309187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82488 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82488 ']' 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82488 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82488 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:14:14.674 killing process with pid 82488 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82488' 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82488 00:14:14.674 13:27:10 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82488 00:14:14.935 [2024-11-18 13:27:10.867387] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.935 [2024-11-18 13:27:10.898265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.935 [2024-11-18 13:27:10.898380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.935 [2024-11-18 13:27:10.905195] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.935 [2024-11-18 13:27:10.905246] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:14.935 [2024-11-18 13:27:10.905253] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:14.935 [2024-11-18 13:27:10.905279] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:14.935 [2024-11-18 13:27:10.905412] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:15.196 13:27:11 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:15.196 00:14:15.196 real 0m3.287s 00:14:15.196 user 0m2.454s 00:14:15.196 sys 0m1.464s 00:14:15.196 13:27:11 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.196 ************************************ 00:14:15.196 13:27:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:15.196 END TEST test_save_ublk_config 00:14:15.196 ************************************ 00:14:15.196 13:27:11 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82534 00:14:15.196 13:27:11 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:15.196 13:27:11 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82534 00:14:15.196 13:27:11 ublk -- common/autotest_common.sh@835 -- # '[' -z 82534 ']' 00:14:15.196 13:27:11 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.196 13:27:11 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.196 13:27:11 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.196 13:27:11 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.196 13:27:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.196 13:27:11 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:15.457 [2024-11-18 13:27:11.329258] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
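For reference, the test_save_ublk_config run that ends above is a configuration round trip: the first target's full subsystem JSON (dumped above) is fed back into a fresh spdk_tgt via -c /dev/fd/63, and the ublk disk is expected to reappear. A minimal hand-driven sketch of the same round trip, using only RPCs that appear in this log (relative paths, the output file name, and the wait-for-socket steps are illustrative assumptions, not taken from this run):

  ./build/bin/spdk_tgt -L ublk &                       # first target
  # ... wait for the RPC socket to come up (the test uses waitforlisten) ...
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 8192 blocks x 4096 B, as in the config above
  ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
  ./scripts/rpc.py save_config > ublk_config.json      # produces the subsystem JSON dumped above
  kill $! && wait $!                                   # stop the first target
  ./build/bin/spdk_tgt -L ublk -c ublk_config.json &   # restart from the saved config
  # ... wait for the RPC socket again ...
  ./scripts/rpc.py ublk_get_disks                      # /dev/ublkb0 should be listed again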
00:14:15.457 [2024-11-18 13:27:11.329374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82534 ] 00:14:15.457 [2024-11-18 13:27:11.487776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:15.457 [2024-11-18 13:27:11.507771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.457 [2024-11-18 13:27:11.507814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.113 13:27:12 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.113 13:27:12 ublk -- common/autotest_common.sh@868 -- # return 0 00:14:16.113 13:27:12 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:16.113 13:27:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:16.113 13:27:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.113 13:27:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.113 ************************************ 00:14:16.113 START TEST test_create_ublk 00:14:16.113 ************************************ 00:14:16.113 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:14:16.113 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:16.113 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.113 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.113 [2024-11-18 13:27:12.187185] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:16.113 [2024-11-18 13:27:12.188279] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:16.113 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.113 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:16.113 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:16.113 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.113 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.113 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.113 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:16.113 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:16.113 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.113 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.375 [2024-11-18 13:27:12.243318] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:16.375 [2024-11-18 13:27:12.243688] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:16.375 [2024-11-18 13:27:12.243709] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:16.375 [2024-11-18 13:27:12.243723] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.375 [2024-11-18 13:27:12.252373] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.375 [2024-11-18 13:27:12.252401] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.375 
[2024-11-18 13:27:12.259187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.375 [2024-11-18 13:27:12.259803] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:16.375 [2024-11-18 13:27:12.299200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.375 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:16.375 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.375 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.375 13:27:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:16.375 { 00:14:16.375 "ublk_device": "/dev/ublkb0", 00:14:16.375 "id": 0, 00:14:16.375 "queue_depth": 512, 00:14:16.375 "num_queues": 4, 00:14:16.375 "bdev_name": "Malloc0" 00:14:16.375 } 00:14:16.375 ]' 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:16.375 13:27:12 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:14:16.375 13:27:12 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:16.636 fio: verification read phase will never start because write phase uses all of runtime 00:14:16.636 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:16.636 fio-3.35 00:14:16.636 Starting 1 process 00:14:26.614 00:14:26.614 fio_test: (groupid=0, jobs=1): err= 0: pid=82579: Mon Nov 18 13:27:22 2024 00:14:26.614 write: IOPS=20.3k, BW=79.1MiB/s (83.0MB/s)(791MiB/10001msec); 0 zone resets 00:14:26.614 clat (usec): min=32, max=3916, avg=48.63, stdev=78.49 00:14:26.614 lat (usec): min=33, max=3916, avg=49.05, stdev=78.50 00:14:26.614 clat percentiles (usec): 00:14:26.614 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:14:26.614 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:14:26.614 | 70.00th=[ 47], 80.00th=[ 49], 90.00th=[ 54], 95.00th=[ 59], 00:14:26.614 | 99.00th=[ 69], 99.50th=[ 76], 99.90th=[ 1237], 99.95th=[ 2376], 00:14:26.614 | 99.99th=[ 3261] 00:14:26.614 bw ( KiB/s): min=66312, max=85848, per=99.94%, avg=80966.74, stdev=5175.78, samples=19 00:14:26.614 iops : min=16578, max=21462, avg=20241.68, stdev=1293.95, samples=19 00:14:26.614 lat (usec) : 50=84.00%, 100=15.69%, 250=0.14%, 500=0.04%, 750=0.01% 00:14:26.614 lat (usec) : 1000=0.01% 00:14:26.614 lat (msec) : 2=0.04%, 4=0.07% 00:14:26.614 cpu : usr=3.36%, sys=15.80%, ctx=202562, majf=0, minf=796 00:14:26.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:26.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.615 issued rwts: total=0,202562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:26.615 00:14:26.615 Run status group 0 (all jobs): 00:14:26.615 WRITE: bw=79.1MiB/s (83.0MB/s), 79.1MiB/s-79.1MiB/s (83.0MB/s-83.0MB/s), io=791MiB (830MB), run=10001-10001msec 00:14:26.615 00:14:26.615 Disk stats (read/write): 00:14:26.615 ublkb0: ios=0/200522, merge=0/0, ticks=0/8084, in_queue=8085, util=99.07% 00:14:26.615 13:27:22 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.615 [2024-11-18 13:27:22.683902] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:26.615 [2024-11-18 13:27:22.716621] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:26.615 [2024-11-18 13:27:22.717571] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:26.615 [2024-11-18 13:27:22.727190] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:26.615 [2024-11-18 13:27:22.727410] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:26.615 [2024-11-18 13:27:22.727422] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.615 13:27:22 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
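For reference, the fio results above come from the single verify job assembled in the preceding xtrace lines: it writes a 0xcc pattern through the ublk block device for 10 seconds (psync engine, iodepth 1 by default). The command, reproducible against any /dev/ublkbN, is the one shown in the log:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

Because the job is time based and the write phase consumes the whole runtime, fio notes above that the verification read phase never starts.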
00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.615 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.873 [2024-11-18 13:27:22.743256] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:26.873 request: 00:14:26.873 { 00:14:26.873 "ublk_id": 0, 00:14:26.873 "method": "ublk_stop_disk", 00:14:26.873 "req_id": 1 00:14:26.873 } 00:14:26.873 Got JSON-RPC error response 00:14:26.873 response: 00:14:26.873 { 00:14:26.873 "code": -19, 00:14:26.873 "message": "No such device" 00:14:26.873 } 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.873 13:27:22 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.873 [2024-11-18 13:27:22.759236] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:26.873 [2024-11-18 13:27:22.760597] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:26.873 [2024-11-18 13:27:22.760626] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.873 13:27:22 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.873 13:27:22 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:26.873 13:27:22 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.873 13:27:22 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:26.873 13:27:22 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:26.873 13:27:22 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:26.873 13:27:22 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.873 13:27:22 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:26.873 13:27:22 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:26.873 13:27:22 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:26.873 00:14:26.873 real 0m10.738s 00:14:26.873 user 0m0.617s 00:14:26.873 sys 0m1.653s 00:14:26.873 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.874 13:27:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.874 ************************************ 00:14:26.874 END TEST test_create_ublk 00:14:26.874 ************************************ 00:14:26.874 13:27:22 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:26.874 13:27:22 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.874 13:27:22 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.874 13:27:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.874 ************************************ 00:14:26.874 START TEST test_create_multi_ublk 00:14:26.874 ************************************ 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.874 [2024-11-18 13:27:22.970182] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:26.874 [2024-11-18 13:27:22.971037] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.874 13:27:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.133 [2024-11-18 13:27:23.054278] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:14:27.133 [2024-11-18 13:27:23.054561] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:27.133 [2024-11-18 13:27:23.054575] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:27.133 [2024-11-18 13:27:23.054580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.133 [2024-11-18 13:27:23.066223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.133 [2024-11-18 13:27:23.066241] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.133 [2024-11-18 13:27:23.078197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.133 [2024-11-18 13:27:23.078666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:27.133 [2024-11-18 13:27:23.092183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.133 [2024-11-18 13:27:23.164277] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:27.133 [2024-11-18 13:27:23.164562] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:27.133 [2024-11-18 13:27:23.164573] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:27.133 [2024-11-18 13:27:23.164580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.133 [2024-11-18 13:27:23.176213] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.133 [2024-11-18 13:27:23.176232] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.133 [2024-11-18 13:27:23.188186] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.133 [2024-11-18 13:27:23.188654] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:27.133 [2024-11-18 13:27:23.201200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.133 13:27:23 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.133 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.392 [2024-11-18 13:27:23.284275] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:27.392 [2024-11-18 13:27:23.284555] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:27.392 [2024-11-18 13:27:23.284569] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:27.392 [2024-11-18 13:27:23.284573] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.392 [2024-11-18 13:27:23.296201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.392 [2024-11-18 13:27:23.296217] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.392 [2024-11-18 13:27:23.308184] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.392 [2024-11-18 13:27:23.308651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:27.392 [2024-11-18 13:27:23.321190] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.392 [2024-11-18 13:27:23.416260] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:27.392 [2024-11-18 13:27:23.416548] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:27.392 [2024-11-18 13:27:23.416559] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:27.392 [2024-11-18 13:27:23.416565] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.392 [2024-11-18 
13:27:23.428204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.392 [2024-11-18 13:27:23.428224] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.392 [2024-11-18 13:27:23.440183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.392 [2024-11-18 13:27:23.440661] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:27.392 [2024-11-18 13:27:23.453201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:27.392 { 00:14:27.392 "ublk_device": "/dev/ublkb0", 00:14:27.392 "id": 0, 00:14:27.392 "queue_depth": 512, 00:14:27.392 "num_queues": 4, 00:14:27.392 "bdev_name": "Malloc0" 00:14:27.392 }, 00:14:27.392 { 00:14:27.392 "ublk_device": "/dev/ublkb1", 00:14:27.392 "id": 1, 00:14:27.392 "queue_depth": 512, 00:14:27.392 "num_queues": 4, 00:14:27.392 "bdev_name": "Malloc1" 00:14:27.392 }, 00:14:27.392 { 00:14:27.392 "ublk_device": "/dev/ublkb2", 00:14:27.392 "id": 2, 00:14:27.392 "queue_depth": 512, 00:14:27.392 "num_queues": 4, 00:14:27.392 "bdev_name": "Malloc2" 00:14:27.392 }, 00:14:27.392 { 00:14:27.392 "ublk_device": "/dev/ublkb3", 00:14:27.392 "id": 3, 00:14:27.392 "queue_depth": 512, 00:14:27.392 "num_queues": 4, 00:14:27.392 "bdev_name": "Malloc3" 00:14:27.392 } 00:14:27.392 ]' 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.392 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
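For reference, the four-disk listing above is produced by repeating the same bdev-plus-ublk pair of RPCs once per device; a minimal sketch of that loop against a target started with -L ublk (names, sizes, and queue settings as in the log):

  ./scripts/rpc.py ublk_create_target
  for i in 0 1 2 3; do
      ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
      ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
  done
  ./scripts/rpc.py ublk_get_disks        # prints the JSON array shown above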
00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:27.650 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:27.908 13:27:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:27.908 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:27.908 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.167 [2024-11-18 13:27:24.124264] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.167 [2024-11-18 13:27:24.160646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.167 [2024-11-18 13:27:24.161620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.167 [2024-11-18 13:27:24.167186] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.167 [2024-11-18 13:27:24.167444] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:28.167 [2024-11-18 13:27:24.167455] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.167 [2024-11-18 13:27:24.182259] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.167 [2024-11-18 13:27:24.213620] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.167 [2024-11-18 13:27:24.214626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.167 [2024-11-18 13:27:24.223191] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.167 [2024-11-18 13:27:24.223404] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:28.167 [2024-11-18 13:27:24.223415] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.167 [2024-11-18 13:27:24.239248] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.167 [2024-11-18 13:27:24.269609] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.167 [2024-11-18 13:27:24.270569] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.167 [2024-11-18 13:27:24.277191] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.167 [2024-11-18 13:27:24.277411] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:28.167 [2024-11-18 13:27:24.277422] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.167 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
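For reference, the per-disk stops above and the target/bdev cleanup that follows mirror the creation loop: each disk is stopped, the ublk target is destroyed with a longer RPC timeout, and the backing bdevs are deleted. Driven by hand, the whole sequence would look roughly like this (same names as above):

  for i in 0 1 2 3; do
      ./scripts/rpc.py ublk_stop_disk $i
  done
  ./scripts/rpc.py -t 120 ublk_destroy_target
  for i in 0 1 2 3; do
      ./scripts/rpc.py bdev_malloc_delete Malloc$i
  done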
00:14:28.425 [2024-11-18 13:27:24.293242] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.425 [2024-11-18 13:27:24.325185] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.425 [2024-11-18 13:27:24.325758] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.425 [2024-11-18 13:27:24.333195] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.425 [2024-11-18 13:27:24.333432] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:28.425 [2024-11-18 13:27:24.333443] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:28.425 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.425 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:28.425 [2024-11-18 13:27:24.525250] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:28.425 [2024-11-18 13:27:24.526047] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:28.425 [2024-11-18 13:27:24.526076] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:28.425 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:28.683 13:27:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:28.942 00:14:28.942 real 0m1.914s 00:14:28.942 user 0m0.785s 00:14:28.942 sys 0m0.157s 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.942 ************************************ 00:14:28.942 END TEST test_create_multi_ublk 00:14:28.942 13:27:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.942 ************************************ 00:14:28.942 13:27:24 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:28.942 13:27:24 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:28.942 13:27:24 ublk -- ublk/ublk.sh@130 -- # killprocess 82534 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@954 -- # '[' -z 82534 ']' 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@958 -- # kill -0 82534 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@959 -- # uname 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82534 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.942 killing process with pid 82534 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82534' 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@973 -- # kill 82534 00:14:28.942 13:27:24 ublk -- common/autotest_common.sh@978 -- # wait 82534 00:14:29.202 [2024-11-18 13:27:25.085033] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:29.202 [2024-11-18 13:27:25.085085] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:29.462 00:14:29.462 real 0m17.586s 00:14:29.462 user 0m27.664s 00:14:29.462 sys 0m7.697s 00:14:29.462 13:27:25 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.462 ************************************ 00:14:29.462 END TEST ublk 00:14:29.462 ************************************ 00:14:29.462 13:27:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:29.462 13:27:25 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:29.462 13:27:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:14:29.462 13:27:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.462 13:27:25 -- common/autotest_common.sh@10 -- # set +x 00:14:29.462 ************************************ 00:14:29.462 START TEST ublk_recovery 00:14:29.462 ************************************ 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:29.462 * Looking for test storage... 00:14:29.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.462 13:27:25 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:29.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.462 --rc genhtml_branch_coverage=1 00:14:29.462 --rc genhtml_function_coverage=1 00:14:29.462 --rc genhtml_legend=1 00:14:29.462 --rc geninfo_all_blocks=1 00:14:29.462 --rc geninfo_unexecuted_blocks=1 00:14:29.462 00:14:29.462 ' 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:29.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.462 --rc genhtml_branch_coverage=1 00:14:29.462 --rc genhtml_function_coverage=1 00:14:29.462 --rc genhtml_legend=1 00:14:29.462 --rc geninfo_all_blocks=1 00:14:29.462 --rc geninfo_unexecuted_blocks=1 00:14:29.462 00:14:29.462 ' 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:29.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.462 --rc genhtml_branch_coverage=1 00:14:29.462 --rc genhtml_function_coverage=1 00:14:29.462 --rc genhtml_legend=1 00:14:29.462 --rc geninfo_all_blocks=1 00:14:29.462 --rc geninfo_unexecuted_blocks=1 00:14:29.462 00:14:29.462 ' 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:29.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.462 --rc genhtml_branch_coverage=1 00:14:29.462 --rc genhtml_function_coverage=1 00:14:29.462 --rc genhtml_legend=1 00:14:29.462 --rc geninfo_all_blocks=1 00:14:29.462 --rc geninfo_unexecuted_blocks=1 00:14:29.462 00:14:29.462 ' 00:14:29.462 13:27:25 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:29.462 13:27:25 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:29.462 13:27:25 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:29.462 13:27:25 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:29.462 13:27:25 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:29.462 13:27:25 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:29.462 13:27:25 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:29.462 13:27:25 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:29.462 13:27:25 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:29.462 13:27:25 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:29.462 13:27:25 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82899 00:14:29.462 13:27:25 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.462 13:27:25 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82899 00:14:29.462 13:27:25 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 82899 ']' 00:14:29.463 13:27:25 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.463 13:27:25 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.463 13:27:25 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.463 13:27:25 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.463 13:27:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.463 13:27:25 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:29.722 [2024-11-18 13:27:25.645456] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:14:29.722 [2024-11-18 13:27:25.645606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82899 ] 00:14:29.722 [2024-11-18 13:27:25.801937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:29.722 [2024-11-18 13:27:25.827417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.722 [2024-11-18 13:27:25.827474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:30.657 13:27:26 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.657 [2024-11-18 13:27:26.486184] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:30.657 [2024-11-18 13:27:26.487131] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.657 13:27:26 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.657 malloc0 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.657 13:27:26 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.657 [2024-11-18 13:27:26.518286] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:30.657 [2024-11-18 13:27:26.518373] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:30.657 [2024-11-18 13:27:26.518379] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:30.657 [2024-11-18 13:27:26.518385] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:30.657 [2024-11-18 13:27:26.527251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:30.657 [2024-11-18 13:27:26.527273] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:30.657 [2024-11-18 13:27:26.534193] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:30.657 [2024-11-18 13:27:26.534304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:30.657 [2024-11-18 13:27:26.549186] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:30.657 1 00:14:30.657 13:27:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.657 13:27:26 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:31.592 13:27:27 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82932 00:14:31.592 13:27:27 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:31.592 13:27:27 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:31.592 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:31.592 fio-3.35 00:14:31.592 Starting 1 process 00:14:36.891 13:27:32 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82899 00:14:36.891 13:27:32 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:42.176 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82899 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:42.176 13:27:37 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83043 00:14:42.176 13:27:37 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.176 13:27:37 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83043 00:14:42.176 13:27:37 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 83043 ']' 00:14:42.176 13:27:37 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.176 13:27:37 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.176 13:27:37 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.176 13:27:37 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.176 13:27:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.176 13:27:37 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:42.176 [2024-11-18 13:27:37.630709] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:14:42.176 [2024-11-18 13:27:37.630805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83043 ] 00:14:42.176 [2024-11-18 13:27:37.782545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:42.176 [2024-11-18 13:27:37.807138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.176 [2024-11-18 13:27:37.807305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:42.436 13:27:38 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.436 [2024-11-18 13:27:38.494190] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:42.436 [2024-11-18 13:27:38.496039] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.436 13:27:38 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.436 malloc0 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.436 13:27:38 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.436 [2024-11-18 13:27:38.542357] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:42.436 [2024-11-18 13:27:38.542420] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:42.436 [2024-11-18 13:27:38.542429] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:42.436 [2024-11-18 13:27:38.550246] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:42.436 [2024-11-18 13:27:38.550275] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:42.436 1 00:14:42.436 13:27:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.436 13:27:38 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82932 00:14:43.813 [2024-11-18 13:27:39.550324] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:43.813 [2024-11-18 13:27:39.554186] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:43.813 [2024-11-18 13:27:39.554201] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:44.747 [2024-11-18 13:27:40.554221] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:44.747 [2024-11-18 13:27:40.562190] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:44.747 [2024-11-18 13:27:40.562207] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:45.681 [2024-11-18 13:27:41.562230] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:45.681 [2024-11-18 13:27:41.570194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:45.681 [2024-11-18 13:27:41.570213] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:45.681 [2024-11-18 13:27:41.570220] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:45.681 [2024-11-18 13:27:41.570292] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:07.636 [2024-11-18 13:28:02.846189] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:07.636 [2024-11-18 13:28:02.849543] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:07.636 [2024-11-18 13:28:02.855354] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:07.636 [2024-11-18 13:28:02.855374] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:34.173 00:15:34.173 fio_test: (groupid=0, jobs=1): err= 0: pid=82935: Mon Nov 18 13:28:27 2024 00:15:34.173 read: IOPS=15.3k, BW=59.6MiB/s (62.5MB/s)(3576MiB/60001msec) 00:15:34.173 slat (nsec): min=1104, max=231908, avg=4786.43, stdev=1391.27 00:15:34.173 clat (usec): min=640, max=30300k, avg=4121.76, stdev=251332.96 00:15:34.173 lat (usec): min=646, max=30300k, avg=4126.55, stdev=251332.96 00:15:34.173 clat percentiles (usec): 00:15:34.173 | 1.00th=[ 1614], 5.00th=[ 1696], 10.00th=[ 1729], 20.00th=[ 1745], 00:15:34.173 | 30.00th=[ 1762], 40.00th=[ 1778], 50.00th=[ 1795], 60.00th=[ 1811], 00:15:34.173 | 70.00th=[ 1844], 80.00th=[ 1975], 90.00th=[ 2376], 95.00th=[ 3490], 00:15:34.173 | 99.00th=[ 5538], 99.50th=[ 6063], 99.90th=[ 7242], 99.95th=[ 8160], 00:15:34.173 | 99.99th=[13042] 00:15:34.173 bw ( KiB/s): min=54936, max=135000, per=100.00%, avg=122117.56, stdev=21305.97, samples=59 00:15:34.173 iops : min=13734, max=33750, avg=30529.39, stdev=5326.49, samples=59 00:15:34.173 write: IOPS=15.2k, BW=59.5MiB/s (62.4MB/s)(3570MiB/60001msec); 0 zone resets 00:15:34.173 slat (nsec): min=1143, max=619550, avg=4817.23, stdev=1531.87 00:15:34.173 clat (usec): min=601, max=30300k, avg=4264.32, stdev=255495.17 00:15:34.173 lat (usec): min=606, max=30300k, avg=4269.14, stdev=255495.17 00:15:34.173 clat percentiles (usec): 00:15:34.173 | 1.00th=[ 1647], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1844], 00:15:34.173 | 30.00th=[ 1860], 40.00th=[ 1876], 50.00th=[ 1876], 60.00th=[ 1893], 00:15:34.173 | 70.00th=[ 1926], 80.00th=[ 2008], 90.00th=[ 2474], 95.00th=[ 3392], 00:15:34.173 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 7242], 99.95th=[ 8094], 00:15:34.173 | 99.99th=[13173] 00:15:34.173 bw ( KiB/s): min=53608, max=135048, per=100.00%, avg=121937.22, stdev=21347.05, samples=59 00:15:34.173 iops : min=13402, max=33762, avg=30484.31, stdev=5336.76, samples=59 00:15:34.173 lat (usec) : 750=0.01% 00:15:34.173 lat (msec) : 2=79.98%, 4=15.85%, 10=4.15%, 20=0.02%, >=2000=0.01% 00:15:34.173 cpu : usr=3.29%, sys=14.93%, ctx=61327, majf=0, minf=14 00:15:34.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:34.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.173 issued 
rwts: total=915401,913942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.173 00:15:34.173 Run status group 0 (all jobs): 00:15:34.173 READ: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=3576MiB (3749MB), run=60001-60001msec 00:15:34.173 WRITE: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=3570MiB (3744MB), run=60001-60001msec 00:15:34.173 00:15:34.173 Disk stats (read/write): 00:15:34.173 ublkb1: ios=911756/910248, merge=0/0, ticks=3722905/3775068, in_queue=7497974, util=99.89% 00:15:34.173 13:28:27 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.173 [2024-11-18 13:28:27.817650] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:34.173 [2024-11-18 13:28:27.860192] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:34.173 [2024-11-18 13:28:27.860349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:34.173 [2024-11-18 13:28:27.864325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:34.173 [2024-11-18 13:28:27.864408] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:34.173 [2024-11-18 13:28:27.864419] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.173 13:28:27 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.173 [2024-11-18 13:28:27.883261] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:34.173 [2024-11-18 13:28:27.884126] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:34.173 [2024-11-18 13:28:27.884155] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.173 13:28:27 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:34.173 13:28:27 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:34.173 13:28:27 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83043 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 83043 ']' 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 83043 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83043 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.173 killing process with pid 83043 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83043' 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@973 -- # kill 83043 00:15:34.173 13:28:27 ublk_recovery -- common/autotest_common.sh@978 -- # wait 83043 
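The recovery pass above can be condensed into the short shell sketch below. It only restates commands already printed in this log (the rpc_cmd helper is assumed to map to scripts/rpc.py against /var/tmp/spdk.sock, and $spdk_pid stands for the first target's pid, 82899 here); treat it as an illustrative outline of what ublk_recovery.sh exercised, not as the script itself.
# start the first target and expose a malloc bdev via ublk
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
spdk_pid=$!
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
# drive I/O against /dev/ublkb1, then kill the target mid-run
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
kill -9 "$spdk_pid"
# start a second target and hand the still-open ublk device back to it
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_recover_disk malloc0 1
# tear down once fio completes
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk 1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_destroy_target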
00:15:34.173 [2024-11-18 13:28:28.078391] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:34.173 [2024-11-18 13:28:28.078449] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:34.173 ************************************ 00:15:34.173 END TEST ublk_recovery 00:15:34.173 ************************************ 00:15:34.173 00:15:34.173 real 1m2.957s 00:15:34.173 user 1m45.753s 00:15:34.173 sys 0m20.700s 00:15:34.173 13:28:28 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.173 13:28:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.173 13:28:28 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:15:34.173 13:28:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:34.173 13:28:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.173 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.173 13:28:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:34.173 13:28:28 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:34.173 13:28:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.173 13:28:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.173 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.173 ************************************ 00:15:34.173 START TEST ftl 00:15:34.173 ************************************ 00:15:34.173 13:28:28 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:34.173 * Looking for test storage... 
00:15:34.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.173 13:28:28 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:34.173 13:28:28 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:15:34.173 13:28:28 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:34.173 13:28:28 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:34.173 13:28:28 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.173 13:28:28 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.173 13:28:28 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.173 13:28:28 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.173 13:28:28 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.173 13:28:28 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.173 13:28:28 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.173 13:28:28 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.173 13:28:28 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.173 13:28:28 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.173 13:28:28 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.173 13:28:28 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:34.173 13:28:28 ftl -- scripts/common.sh@345 -- # : 1 00:15:34.173 13:28:28 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.173 13:28:28 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.173 13:28:28 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:34.174 13:28:28 ftl -- scripts/common.sh@353 -- # local d=1 00:15:34.174 13:28:28 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.174 13:28:28 ftl -- scripts/common.sh@355 -- # echo 1 00:15:34.174 13:28:28 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.174 13:28:28 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:34.174 13:28:28 ftl -- scripts/common.sh@353 -- # local d=2 00:15:34.174 13:28:28 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.174 13:28:28 ftl -- scripts/common.sh@355 -- # echo 2 00:15:34.174 13:28:28 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.174 13:28:28 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.174 13:28:28 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.174 13:28:28 ftl -- scripts/common.sh@368 -- # return 0 00:15:34.174 13:28:28 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.174 13:28:28 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:34.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.174 --rc genhtml_branch_coverage=1 00:15:34.174 --rc genhtml_function_coverage=1 00:15:34.174 --rc genhtml_legend=1 00:15:34.174 --rc geninfo_all_blocks=1 00:15:34.174 --rc geninfo_unexecuted_blocks=1 00:15:34.174 00:15:34.174 ' 00:15:34.174 13:28:28 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:34.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.174 --rc genhtml_branch_coverage=1 00:15:34.174 --rc genhtml_function_coverage=1 00:15:34.174 --rc genhtml_legend=1 00:15:34.174 --rc geninfo_all_blocks=1 00:15:34.174 --rc geninfo_unexecuted_blocks=1 00:15:34.174 00:15:34.174 ' 00:15:34.174 13:28:28 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:34.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.174 --rc genhtml_branch_coverage=1 00:15:34.174 --rc genhtml_function_coverage=1 00:15:34.174 --rc 
genhtml_legend=1 00:15:34.174 --rc geninfo_all_blocks=1 00:15:34.174 --rc geninfo_unexecuted_blocks=1 00:15:34.174 00:15:34.174 ' 00:15:34.174 13:28:28 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:34.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.174 --rc genhtml_branch_coverage=1 00:15:34.174 --rc genhtml_function_coverage=1 00:15:34.174 --rc genhtml_legend=1 00:15:34.174 --rc geninfo_all_blocks=1 00:15:34.174 --rc geninfo_unexecuted_blocks=1 00:15:34.174 00:15:34.174 ' 00:15:34.174 13:28:28 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:34.174 13:28:28 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:34.174 13:28:28 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.174 13:28:28 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:34.174 13:28:28 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:34.174 13:28:28 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:34.174 13:28:28 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.174 13:28:28 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:34.174 13:28:28 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:34.174 13:28:28 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.174 13:28:28 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.174 13:28:28 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:34.174 13:28:28 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:34.174 13:28:28 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:34.174 13:28:28 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:34.174 13:28:28 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:34.174 13:28:28 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:34.174 13:28:28 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.174 13:28:28 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.174 13:28:28 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:34.174 13:28:28 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:34.174 13:28:28 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:34.174 13:28:28 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:34.174 13:28:28 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:34.174 13:28:28 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:34.174 13:28:28 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:34.174 13:28:28 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:34.174 13:28:28 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.174 13:28:28 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.174 13:28:28 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.174 13:28:28 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:34.174 13:28:28 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:34.174 13:28:28 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:34.174 13:28:28 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:34.174 13:28:28 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:34.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:34.174 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:34.174 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:34.174 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:34.174 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:34.174 13:28:29 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83842 00:15:34.174 13:28:29 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:34.174 13:28:29 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83842 00:15:34.174 13:28:29 ftl -- common/autotest_common.sh@835 -- # '[' -z 83842 ']' 00:15:34.174 13:28:29 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.174 13:28:29 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.174 13:28:29 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.174 13:28:29 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.174 13:28:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:34.174 [2024-11-18 13:28:29.138751] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:15:34.174 [2024-11-18 13:28:29.138881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83842 ] 00:15:34.174 [2024-11-18 13:28:29.296385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.174 [2024-11-18 13:28:29.320624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.174 13:28:29 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.174 13:28:29 ftl -- common/autotest_common.sh@868 -- # return 0 00:15:34.174 13:28:29 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:34.174 13:28:30 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:34.433 13:28:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:34.433 13:28:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:34.999 13:28:30 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:34.999 13:28:30 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:34.999 13:28:30 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@50 -- # break 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:35.259 13:28:31 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@63 -- # break 00:15:35.259 13:28:31 ftl -- ftl/ftl.sh@66 -- # killprocess 83842 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@954 -- # '[' -z 83842 ']' 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@958 -- # kill -0 83842 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@959 -- # uname 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83842 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.259 killing process with pid 83842 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83842' 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@973 -- # kill 83842 00:15:35.259 13:28:31 ftl -- common/autotest_common.sh@978 -- # wait 83842 00:15:35.520 13:28:31 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:35.520 13:28:31 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:35.520 13:28:31 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:35.520 13:28:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.520 13:28:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:35.520 ************************************ 00:15:35.520 START TEST ftl_fio_basic 00:15:35.520 ************************************ 00:15:35.520 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:35.780 * Looking for test storage... 
00:15:35.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.780 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:35.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.780 --rc genhtml_branch_coverage=1 00:15:35.781 --rc genhtml_function_coverage=1 00:15:35.781 --rc genhtml_legend=1 00:15:35.781 --rc geninfo_all_blocks=1 00:15:35.781 --rc geninfo_unexecuted_blocks=1 00:15:35.781 00:15:35.781 ' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:35.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.781 --rc 
genhtml_branch_coverage=1 00:15:35.781 --rc genhtml_function_coverage=1 00:15:35.781 --rc genhtml_legend=1 00:15:35.781 --rc geninfo_all_blocks=1 00:15:35.781 --rc geninfo_unexecuted_blocks=1 00:15:35.781 00:15:35.781 ' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:35.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.781 --rc genhtml_branch_coverage=1 00:15:35.781 --rc genhtml_function_coverage=1 00:15:35.781 --rc genhtml_legend=1 00:15:35.781 --rc geninfo_all_blocks=1 00:15:35.781 --rc geninfo_unexecuted_blocks=1 00:15:35.781 00:15:35.781 ' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:35.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.781 --rc genhtml_branch_coverage=1 00:15:35.781 --rc genhtml_function_coverage=1 00:15:35.781 --rc genhtml_legend=1 00:15:35.781 --rc geninfo_all_blocks=1 00:15:35.781 --rc geninfo_unexecuted_blocks=1 00:15:35.781 00:15:35.781 ' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:35.781 
13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83952 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83952 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 83952 ']' 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:35.781 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:35.781 13:28:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.781 [2024-11-18 13:28:31.850375] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:15:35.781 [2024-11-18 13:28:31.850472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83952 ] 00:15:36.041 [2024-11-18 13:28:31.997834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:36.041 [2024-11-18 13:28:32.016718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.041 [2024-11-18 13:28:32.016825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.041 [2024-11-18 13:28:32.016911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.607 13:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.608 13:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:15:36.608 13:28:32 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:36.608 13:28:32 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:36.608 13:28:32 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:36.608 13:28:32 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:36.608 13:28:32 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:36.608 13:28:32 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:36.866 13:28:32 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:36.866 13:28:32 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:36.866 13:28:32 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:36.866 13:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:36.866 13:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:36.866 13:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:36.866 13:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:36.866 13:28:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:37.125 { 00:15:37.125 "name": "nvme0n1", 00:15:37.125 "aliases": [ 00:15:37.125 "c5664999-6ff9-4175-8c3a-1842b50e7215" 00:15:37.125 ], 00:15:37.125 "product_name": "NVMe disk", 00:15:37.125 "block_size": 4096, 00:15:37.125 "num_blocks": 1310720, 00:15:37.125 "uuid": "c5664999-6ff9-4175-8c3a-1842b50e7215", 00:15:37.125 "numa_id": -1, 00:15:37.125 "assigned_rate_limits": { 00:15:37.125 "rw_ios_per_sec": 0, 00:15:37.125 "rw_mbytes_per_sec": 0, 00:15:37.125 "r_mbytes_per_sec": 0, 00:15:37.125 "w_mbytes_per_sec": 0 00:15:37.125 }, 00:15:37.125 "claimed": false, 00:15:37.125 "zoned": false, 00:15:37.125 "supported_io_types": { 00:15:37.125 "read": true, 00:15:37.125 "write": true, 00:15:37.125 "unmap": true, 00:15:37.125 "flush": true, 
00:15:37.125 "reset": true, 00:15:37.125 "nvme_admin": true, 00:15:37.125 "nvme_io": true, 00:15:37.125 "nvme_io_md": false, 00:15:37.125 "write_zeroes": true, 00:15:37.125 "zcopy": false, 00:15:37.125 "get_zone_info": false, 00:15:37.125 "zone_management": false, 00:15:37.125 "zone_append": false, 00:15:37.125 "compare": true, 00:15:37.125 "compare_and_write": false, 00:15:37.125 "abort": true, 00:15:37.125 "seek_hole": false, 00:15:37.125 "seek_data": false, 00:15:37.125 "copy": true, 00:15:37.125 "nvme_iov_md": false 00:15:37.125 }, 00:15:37.125 "driver_specific": { 00:15:37.125 "nvme": [ 00:15:37.125 { 00:15:37.125 "pci_address": "0000:00:11.0", 00:15:37.125 "trid": { 00:15:37.125 "trtype": "PCIe", 00:15:37.125 "traddr": "0000:00:11.0" 00:15:37.125 }, 00:15:37.125 "ctrlr_data": { 00:15:37.125 "cntlid": 0, 00:15:37.125 "vendor_id": "0x1b36", 00:15:37.125 "model_number": "QEMU NVMe Ctrl", 00:15:37.125 "serial_number": "12341", 00:15:37.125 "firmware_revision": "8.0.0", 00:15:37.125 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:37.125 "oacs": { 00:15:37.125 "security": 0, 00:15:37.125 "format": 1, 00:15:37.125 "firmware": 0, 00:15:37.125 "ns_manage": 1 00:15:37.125 }, 00:15:37.125 "multi_ctrlr": false, 00:15:37.125 "ana_reporting": false 00:15:37.125 }, 00:15:37.125 "vs": { 00:15:37.125 "nvme_version": "1.4" 00:15:37.125 }, 00:15:37.125 "ns_data": { 00:15:37.125 "id": 1, 00:15:37.125 "can_share": false 00:15:37.125 } 00:15:37.125 } 00:15:37.125 ], 00:15:37.125 "mp_policy": "active_passive" 00:15:37.125 } 00:15:37.125 } 00:15:37.125 ]' 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:37.125 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:37.384 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:37.384 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:37.642 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e68c5ddd-79bd-4e9f-9151-5839254c75c9 00:15:37.642 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e68c5ddd-79bd-4e9f-9151-5839254c75c9 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:37.907 13:28:33 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:37.907 13:28:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:38.176 { 00:15:38.176 "name": "2c93f6b4-8376-490b-b6c8-29830fb419ae", 00:15:38.176 "aliases": [ 00:15:38.176 "lvs/nvme0n1p0" 00:15:38.176 ], 00:15:38.176 "product_name": "Logical Volume", 00:15:38.176 "block_size": 4096, 00:15:38.176 "num_blocks": 26476544, 00:15:38.176 "uuid": "2c93f6b4-8376-490b-b6c8-29830fb419ae", 00:15:38.176 "assigned_rate_limits": { 00:15:38.176 "rw_ios_per_sec": 0, 00:15:38.176 "rw_mbytes_per_sec": 0, 00:15:38.176 "r_mbytes_per_sec": 0, 00:15:38.176 "w_mbytes_per_sec": 0 00:15:38.176 }, 00:15:38.176 "claimed": false, 00:15:38.176 "zoned": false, 00:15:38.176 "supported_io_types": { 00:15:38.176 "read": true, 00:15:38.176 "write": true, 00:15:38.176 "unmap": true, 00:15:38.176 "flush": false, 00:15:38.176 "reset": true, 00:15:38.176 "nvme_admin": false, 00:15:38.176 "nvme_io": false, 00:15:38.176 "nvme_io_md": false, 00:15:38.176 "write_zeroes": true, 00:15:38.176 "zcopy": false, 00:15:38.176 "get_zone_info": false, 00:15:38.176 "zone_management": false, 00:15:38.176 "zone_append": false, 00:15:38.176 "compare": false, 00:15:38.176 "compare_and_write": false, 00:15:38.176 "abort": false, 00:15:38.176 "seek_hole": true, 00:15:38.176 "seek_data": true, 00:15:38.176 "copy": false, 00:15:38.176 "nvme_iov_md": false 00:15:38.176 }, 00:15:38.176 "driver_specific": { 00:15:38.176 "lvol": { 00:15:38.176 "lvol_store_uuid": "e68c5ddd-79bd-4e9f-9151-5839254c75c9", 00:15:38.176 "base_bdev": "nvme0n1", 00:15:38.176 "thin_provision": true, 00:15:38.176 "num_allocated_clusters": 0, 00:15:38.176 "snapshot": false, 00:15:38.176 "clone": false, 00:15:38.176 "esnap_clone": false 00:15:38.176 } 00:15:38.176 } 00:15:38.176 } 00:15:38.176 ]' 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:38.176 13:28:34 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
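For readability, the device bring-up performed by the ftl_fio_basic prologue (including the nv-cache split that appears a few lines below) amounts to the following sketch; every command and argument is taken from the log itself, so this is only a condensed restatement of the sequence, not the test's source.
# base device: the QEMU NVMe at 0000:00:11.0, wrapped in an lvstore plus a thin-provisioned lvol
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e68c5ddd-79bd-4e9f-9151-5839254c75c9
# cache device: the NVMe at 0000:00:10.0, carved with bdev_split_create to back the FTL nv cache
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1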
00:15:38.434 13:28:34 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:38.434 13:28:34 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:38.434 13:28:34 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:38.434 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:38.434 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:38.434 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:38.434 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:38.434 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:38.434 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:38.434 { 00:15:38.434 "name": "2c93f6b4-8376-490b-b6c8-29830fb419ae", 00:15:38.434 "aliases": [ 00:15:38.434 "lvs/nvme0n1p0" 00:15:38.434 ], 00:15:38.434 "product_name": "Logical Volume", 00:15:38.434 "block_size": 4096, 00:15:38.434 "num_blocks": 26476544, 00:15:38.434 "uuid": "2c93f6b4-8376-490b-b6c8-29830fb419ae", 00:15:38.434 "assigned_rate_limits": { 00:15:38.434 "rw_ios_per_sec": 0, 00:15:38.434 "rw_mbytes_per_sec": 0, 00:15:38.434 "r_mbytes_per_sec": 0, 00:15:38.434 "w_mbytes_per_sec": 0 00:15:38.434 }, 00:15:38.434 "claimed": false, 00:15:38.434 "zoned": false, 00:15:38.434 "supported_io_types": { 00:15:38.434 "read": true, 00:15:38.434 "write": true, 00:15:38.434 "unmap": true, 00:15:38.434 "flush": false, 00:15:38.434 "reset": true, 00:15:38.434 "nvme_admin": false, 00:15:38.434 "nvme_io": false, 00:15:38.434 "nvme_io_md": false, 00:15:38.434 "write_zeroes": true, 00:15:38.434 "zcopy": false, 00:15:38.434 "get_zone_info": false, 00:15:38.434 "zone_management": false, 00:15:38.434 "zone_append": false, 00:15:38.434 "compare": false, 00:15:38.434 "compare_and_write": false, 00:15:38.434 "abort": false, 00:15:38.434 "seek_hole": true, 00:15:38.434 "seek_data": true, 00:15:38.434 "copy": false, 00:15:38.434 "nvme_iov_md": false 00:15:38.434 }, 00:15:38.434 "driver_specific": { 00:15:38.434 "lvol": { 00:15:38.434 "lvol_store_uuid": "e68c5ddd-79bd-4e9f-9151-5839254c75c9", 00:15:38.434 "base_bdev": "nvme0n1", 00:15:38.434 "thin_provision": true, 00:15:38.434 "num_allocated_clusters": 0, 00:15:38.434 "snapshot": false, 00:15:38.434 "clone": false, 00:15:38.434 "esnap_clone": false 00:15:38.434 } 00:15:38.434 } 00:15:38.434 } 00:15:38.434 ]' 00:15:38.434 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:38.693 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:38.693 13:28:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2c93f6b4-8376-490b-b6c8-29830fb419ae 00:15:38.951 13:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:38.951 { 00:15:38.951 "name": "2c93f6b4-8376-490b-b6c8-29830fb419ae", 00:15:38.951 "aliases": [ 00:15:38.951 "lvs/nvme0n1p0" 00:15:38.951 ], 00:15:38.951 "product_name": "Logical Volume", 00:15:38.951 "block_size": 4096, 00:15:38.951 "num_blocks": 26476544, 00:15:38.951 "uuid": "2c93f6b4-8376-490b-b6c8-29830fb419ae", 00:15:38.951 "assigned_rate_limits": { 00:15:38.951 "rw_ios_per_sec": 0, 00:15:38.951 "rw_mbytes_per_sec": 0, 00:15:38.951 "r_mbytes_per_sec": 0, 00:15:38.951 "w_mbytes_per_sec": 0 00:15:38.951 }, 00:15:38.951 "claimed": false, 00:15:38.951 "zoned": false, 00:15:38.951 "supported_io_types": { 00:15:38.951 "read": true, 00:15:38.951 "write": true, 00:15:38.951 "unmap": true, 00:15:38.951 "flush": false, 00:15:38.951 "reset": true, 00:15:38.951 "nvme_admin": false, 00:15:38.951 "nvme_io": false, 00:15:38.951 "nvme_io_md": false, 00:15:38.951 "write_zeroes": true, 00:15:38.951 "zcopy": false, 00:15:38.951 "get_zone_info": false, 00:15:38.951 "zone_management": false, 00:15:38.951 "zone_append": false, 00:15:38.951 "compare": false, 00:15:38.951 "compare_and_write": false, 00:15:38.951 "abort": false, 00:15:38.951 "seek_hole": true, 00:15:38.951 "seek_data": true, 00:15:38.951 "copy": false, 00:15:38.951 "nvme_iov_md": false 00:15:38.951 }, 00:15:38.951 "driver_specific": { 00:15:38.951 "lvol": { 00:15:38.951 "lvol_store_uuid": "e68c5ddd-79bd-4e9f-9151-5839254c75c9", 00:15:38.951 "base_bdev": "nvme0n1", 00:15:38.951 "thin_provision": true, 00:15:38.951 "num_allocated_clusters": 0, 00:15:38.951 "snapshot": false, 00:15:38.951 "clone": false, 00:15:38.951 "esnap_clone": false 00:15:38.951 } 00:15:38.951 } 00:15:38.951 } 00:15:38.951 ]' 00:15:38.951 13:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:38.951 13:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:38.951 13:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:38.951 13:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:38.951 13:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:38.951 13:28:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:38.951 13:28:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:39.211 13:28:35 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:39.211 13:28:35 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
2c93f6b4-8376-490b-b6c8-29830fb419ae -c nvc0n1p0 --l2p_dram_limit 60 00:15:39.211 [2024-11-18 13:28:35.264352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.264393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:39.211 [2024-11-18 13:28:35.264404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:39.211 [2024-11-18 13:28:35.264412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.264470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.264486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:39.211 [2024-11-18 13:28:35.264493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:15:39.211 [2024-11-18 13:28:35.264502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.264529] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:39.211 [2024-11-18 13:28:35.264754] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:39.211 [2024-11-18 13:28:35.264766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.264773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:39.211 [2024-11-18 13:28:35.264779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:15:39.211 [2024-11-18 13:28:35.264786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.264814] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7a88c6fb-e0d3-46a6-a3cb-ab7cfae6d436 00:15:39.211 [2024-11-18 13:28:35.265839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.265856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:39.211 [2024-11-18 13:28:35.265865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:39.211 [2024-11-18 13:28:35.265871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.270652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.270675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:39.211 [2024-11-18 13:28:35.270684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.711 ms 00:15:39.211 [2024-11-18 13:28:35.270700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.270776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.270782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:39.211 [2024-11-18 13:28:35.270791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:39.211 [2024-11-18 13:28:35.270797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.270835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.270841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:39.211 [2024-11-18 13:28:35.270857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.009 ms 00:15:39.211 [2024-11-18 13:28:35.270863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.270890] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:39.211 [2024-11-18 13:28:35.272162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.272194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:39.211 [2024-11-18 13:28:35.272202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.279 ms 00:15:39.211 [2024-11-18 13:28:35.272209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.272238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.272246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:39.211 [2024-11-18 13:28:35.272252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:39.211 [2024-11-18 13:28:35.272261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.272288] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:39.211 [2024-11-18 13:28:35.272400] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:39.211 [2024-11-18 13:28:35.272408] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:39.211 [2024-11-18 13:28:35.272418] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:39.211 [2024-11-18 13:28:35.272433] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:39.211 [2024-11-18 13:28:35.272443] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:39.211 [2024-11-18 13:28:35.272449] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:39.211 [2024-11-18 13:28:35.272456] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:39.211 [2024-11-18 13:28:35.272462] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:39.211 [2024-11-18 13:28:35.272470] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:39.211 [2024-11-18 13:28:35.272475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.272482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:39.211 [2024-11-18 13:28:35.272488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:15:39.211 [2024-11-18 13:28:35.272495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.272571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.211 [2024-11-18 13:28:35.272580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:39.211 [2024-11-18 13:28:35.272587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:39.211 [2024-11-18 13:28:35.272594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.211 [2024-11-18 13:28:35.272679] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:15:39.211 [2024-11-18 13:28:35.272687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:39.211 [2024-11-18 13:28:35.272694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:39.211 [2024-11-18 13:28:35.272701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.211 [2024-11-18 13:28:35.272707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:39.211 [2024-11-18 13:28:35.272713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:39.211 [2024-11-18 13:28:35.272718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:39.211 [2024-11-18 13:28:35.272725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:39.211 [2024-11-18 13:28:35.272730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:39.211 [2024-11-18 13:28:35.272736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:39.211 [2024-11-18 13:28:35.272741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:39.211 [2024-11-18 13:28:35.272748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:39.211 [2024-11-18 13:28:35.272753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:39.211 [2024-11-18 13:28:35.272762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:39.211 [2024-11-18 13:28:35.272770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:39.211 [2024-11-18 13:28:35.272776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.211 [2024-11-18 13:28:35.272790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:39.211 [2024-11-18 13:28:35.272797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:39.212 [2024-11-18 13:28:35.272802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.212 [2024-11-18 13:28:35.272809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:39.212 [2024-11-18 13:28:35.272815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:39.212 [2024-11-18 13:28:35.272822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.212 [2024-11-18 13:28:35.272828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:39.212 [2024-11-18 13:28:35.272835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:39.212 [2024-11-18 13:28:35.272840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.212 [2024-11-18 13:28:35.272847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:39.212 [2024-11-18 13:28:35.272853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:39.212 [2024-11-18 13:28:35.272860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.212 [2024-11-18 13:28:35.272866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:39.212 [2024-11-18 13:28:35.272874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:39.212 [2024-11-18 13:28:35.272880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.212 [2024-11-18 13:28:35.272887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:39.212 [2024-11-18 13:28:35.272893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:15:39.212 [2024-11-18 13:28:35.272900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:39.212 [2024-11-18 13:28:35.272906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:39.212 [2024-11-18 13:28:35.272913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:39.212 [2024-11-18 13:28:35.272918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:39.212 [2024-11-18 13:28:35.272925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:39.212 [2024-11-18 13:28:35.272931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:39.212 [2024-11-18 13:28:35.272939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.212 [2024-11-18 13:28:35.272945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:39.212 [2024-11-18 13:28:35.272952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:39.212 [2024-11-18 13:28:35.272958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.212 [2024-11-18 13:28:35.272964] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:39.212 [2024-11-18 13:28:35.272971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:39.212 [2024-11-18 13:28:35.272980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:39.212 [2024-11-18 13:28:35.272989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.212 [2024-11-18 13:28:35.273005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:39.212 [2024-11-18 13:28:35.273011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:39.212 [2024-11-18 13:28:35.273019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:39.212 [2024-11-18 13:28:35.273025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:39.212 [2024-11-18 13:28:35.273032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:39.212 [2024-11-18 13:28:35.273037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:39.212 [2024-11-18 13:28:35.273047] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:39.212 [2024-11-18 13:28:35.273055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:39.212 [2024-11-18 13:28:35.273063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:39.212 [2024-11-18 13:28:35.273069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:39.212 [2024-11-18 13:28:35.273077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:39.212 [2024-11-18 13:28:35.273083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:39.212 [2024-11-18 13:28:35.273091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:39.212 [2024-11-18 13:28:35.273096] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:39.212 [2024-11-18 13:28:35.273106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:39.212 [2024-11-18 13:28:35.273112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:39.212 [2024-11-18 13:28:35.273120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:39.212 [2024-11-18 13:28:35.273126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:39.212 [2024-11-18 13:28:35.273133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:39.212 [2024-11-18 13:28:35.273139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:39.212 [2024-11-18 13:28:35.273147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:39.212 [2024-11-18 13:28:35.273153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:39.212 [2024-11-18 13:28:35.273161] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:39.212 [2024-11-18 13:28:35.273176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:39.212 [2024-11-18 13:28:35.273185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:39.212 [2024-11-18 13:28:35.273191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:39.212 [2024-11-18 13:28:35.273198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:39.212 [2024-11-18 13:28:35.273204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:39.212 [2024-11-18 13:28:35.273212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.212 [2024-11-18 13:28:35.273217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:39.212 [2024-11-18 13:28:35.273225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:15:39.212 [2024-11-18 13:28:35.273232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.212 [2024-11-18 13:28:35.273290] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
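(Again a sketch assembled from the trace, not captured output: with the cache controller attached, the script splits off a 5171 MiB write-buffer partition and creates the FTL bdev on top of the base lvol; the UUID and sizes are the values computed above.)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1    # -> nvc0n1p0, used as the NV cache
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2c93f6b4-8376-490b-b6c8-29830fb419ae -c nvc0n1p0 --l2p_dram_limit 60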
00:15:39.212 [2024-11-18 13:28:35.273298] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:41.112 [2024-11-18 13:28:37.172468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.112 [2024-11-18 13:28:37.172514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:41.112 [2024-11-18 13:28:37.172529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1899.166 ms 00:15:41.112 [2024-11-18 13:28:37.172536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.112 [2024-11-18 13:28:37.180676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.112 [2024-11-18 13:28:37.180705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:41.112 [2024-11-18 13:28:37.180719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.053 ms 00:15:41.112 [2024-11-18 13:28:37.180725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.112 [2024-11-18 13:28:37.180824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.112 [2024-11-18 13:28:37.180832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:41.112 [2024-11-18 13:28:37.180840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:41.112 [2024-11-18 13:28:37.180846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.112 [2024-11-18 13:28:37.198837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.112 [2024-11-18 13:28:37.198879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:41.112 [2024-11-18 13:28:37.198910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.947 ms 00:15:41.112 [2024-11-18 13:28:37.198919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.112 [2024-11-18 13:28:37.198981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.113 [2024-11-18 13:28:37.198992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:41.113 [2024-11-18 13:28:37.199003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:15:41.113 [2024-11-18 13:28:37.199011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.113 [2024-11-18 13:28:37.199432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.113 [2024-11-18 13:28:37.199450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:41.113 [2024-11-18 13:28:37.199463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:15:41.113 [2024-11-18 13:28:37.199475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.113 [2024-11-18 13:28:37.199624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.113 [2024-11-18 13:28:37.199635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:41.113 [2024-11-18 13:28:37.199648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:15:41.113 [2024-11-18 13:28:37.199671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.113 [2024-11-18 13:28:37.205591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.113 [2024-11-18 13:28:37.205620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:41.113 [2024-11-18 
13:28:37.205633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.871 ms 00:15:41.113 [2024-11-18 13:28:37.205642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.113 [2024-11-18 13:28:37.214261] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:41.113 [2024-11-18 13:28:37.228265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.113 [2024-11-18 13:28:37.228290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:41.113 [2024-11-18 13:28:37.228299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.527 ms 00:15:41.113 [2024-11-18 13:28:37.228316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.262845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.262877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:41.371 [2024-11-18 13:28:37.262885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.501 ms 00:15:41.371 [2024-11-18 13:28:37.262895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.263042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.263058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:41.371 [2024-11-18 13:28:37.263064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:15:41.371 [2024-11-18 13:28:37.263072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.265427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.265457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:41.371 [2024-11-18 13:28:37.265466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.334 ms 00:15:41.371 [2024-11-18 13:28:37.265474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.267437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.267463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:41.371 [2024-11-18 13:28:37.267471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.926 ms 00:15:41.371 [2024-11-18 13:28:37.267477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.267735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.267749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:41.371 [2024-11-18 13:28:37.267755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:15:41.371 [2024-11-18 13:28:37.267764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.286744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.286774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:41.371 [2024-11-18 13:28:37.286782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.956 ms 00:15:41.371 [2024-11-18 13:28:37.286790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.289995] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.290031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:41.371 [2024-11-18 13:28:37.290047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:15:41.371 [2024-11-18 13:28:37.290055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.292351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.292376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:41.371 [2024-11-18 13:28:37.292383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:15:41.371 [2024-11-18 13:28:37.292390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.294829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.294857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:41.371 [2024-11-18 13:28:37.294864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.369 ms 00:15:41.371 [2024-11-18 13:28:37.294873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.371 [2024-11-18 13:28:37.294907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.371 [2024-11-18 13:28:37.294916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:41.372 [2024-11-18 13:28:37.294922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:41.372 [2024-11-18 13:28:37.294929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.372 [2024-11-18 13:28:37.294986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.372 [2024-11-18 13:28:37.294995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:41.372 [2024-11-18 13:28:37.295003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:41.372 [2024-11-18 13:28:37.295010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.372 [2024-11-18 13:28:37.295808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2031.126 ms, result 0 00:15:41.372 { 00:15:41.372 "name": "ftl0", 00:15:41.372 "uuid": "7a88c6fb-e0d3-46a6-a3cb-ab7cfae6d436" 00:15:41.372 } 00:15:41.372 13:28:37 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:41.372 13:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:15:41.372 13:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.372 13:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:15:41.372 13:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.372 13:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.372 13:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:41.631 13:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:41.631 [ 00:15:41.631 { 00:15:41.631 "name": "ftl0", 00:15:41.631 "aliases": [ 00:15:41.631 "7a88c6fb-e0d3-46a6-a3cb-ab7cfae6d436" 00:15:41.631 ], 00:15:41.631 "product_name": "FTL disk", 00:15:41.631 
"block_size": 4096, 00:15:41.631 "num_blocks": 20971520, 00:15:41.631 "uuid": "7a88c6fb-e0d3-46a6-a3cb-ab7cfae6d436", 00:15:41.631 "assigned_rate_limits": { 00:15:41.631 "rw_ios_per_sec": 0, 00:15:41.631 "rw_mbytes_per_sec": 0, 00:15:41.631 "r_mbytes_per_sec": 0, 00:15:41.631 "w_mbytes_per_sec": 0 00:15:41.631 }, 00:15:41.631 "claimed": false, 00:15:41.631 "zoned": false, 00:15:41.631 "supported_io_types": { 00:15:41.631 "read": true, 00:15:41.631 "write": true, 00:15:41.631 "unmap": true, 00:15:41.631 "flush": true, 00:15:41.631 "reset": false, 00:15:41.631 "nvme_admin": false, 00:15:41.631 "nvme_io": false, 00:15:41.631 "nvme_io_md": false, 00:15:41.631 "write_zeroes": true, 00:15:41.631 "zcopy": false, 00:15:41.631 "get_zone_info": false, 00:15:41.631 "zone_management": false, 00:15:41.631 "zone_append": false, 00:15:41.631 "compare": false, 00:15:41.631 "compare_and_write": false, 00:15:41.631 "abort": false, 00:15:41.631 "seek_hole": false, 00:15:41.631 "seek_data": false, 00:15:41.631 "copy": false, 00:15:41.631 "nvme_iov_md": false 00:15:41.631 }, 00:15:41.631 "driver_specific": { 00:15:41.631 "ftl": { 00:15:41.631 "base_bdev": "2c93f6b4-8376-490b-b6c8-29830fb419ae", 00:15:41.631 "cache": "nvc0n1p0" 00:15:41.631 } 00:15:41.631 } 00:15:41.631 } 00:15:41.631 ] 00:15:41.631 13:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:15:41.631 13:28:37 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:41.631 13:28:37 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:41.890 13:28:37 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:41.890 13:28:37 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:42.150 [2024-11-18 13:28:38.109113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.109145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:42.150 [2024-11-18 13:28:38.109157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:42.150 [2024-11-18 13:28:38.109163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.109200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:42.150 [2024-11-18 13:28:38.109629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.109655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:42.150 [2024-11-18 13:28:38.109676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:15:42.150 [2024-11-18 13:28:38.109683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.110110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.110134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:42.150 [2024-11-18 13:28:38.110142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:15:42.150 [2024-11-18 13:28:38.110150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.112575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.112590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:42.150 [2024-11-18 
13:28:38.112597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.386 ms 00:15:42.150 [2024-11-18 13:28:38.112605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.117185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.117206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:42.150 [2024-11-18 13:28:38.117214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.545 ms 00:15:42.150 [2024-11-18 13:28:38.117222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.118324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.118353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:42.150 [2024-11-18 13:28:38.118359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:15:42.150 [2024-11-18 13:28:38.118368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.121756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.121786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:42.150 [2024-11-18 13:28:38.121797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.352 ms 00:15:42.150 [2024-11-18 13:28:38.121805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.121943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.121952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:42.150 [2024-11-18 13:28:38.121959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:15:42.150 [2024-11-18 13:28:38.121965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.123220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.123244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:42.150 [2024-11-18 13:28:38.123251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:15:42.150 [2024-11-18 13:28:38.123258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.124253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.124281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:42.150 [2024-11-18 13:28:38.124288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:15:42.150 [2024-11-18 13:28:38.124295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.125131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.125156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:42.150 [2024-11-18 13:28:38.125162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:15:42.150 [2024-11-18 13:28:38.125180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.125991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.150 [2024-11-18 13:28:38.126018] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:42.150 [2024-11-18 13:28:38.126025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:15:42.150 [2024-11-18 13:28:38.126031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.150 [2024-11-18 13:28:38.126065] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:42.150 [2024-11-18 13:28:38.126077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 
13:28:38.126235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:42.150 [2024-11-18 13:28:38.126316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:42.151 [2024-11-18 13:28:38.126399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:42.151 [2024-11-18 13:28:38.126759] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:42.151 [2024-11-18 13:28:38.126765] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7a88c6fb-e0d3-46a6-a3cb-ab7cfae6d436 00:15:42.151 [2024-11-18 13:28:38.126774] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:42.151 [2024-11-18 13:28:38.126779] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:42.151 [2024-11-18 13:28:38.126787] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:42.152 [2024-11-18 13:28:38.126800] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:42.152 [2024-11-18 13:28:38.126807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:42.152 [2024-11-18 13:28:38.126819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:42.152 [2024-11-18 13:28:38.126826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:42.152 [2024-11-18 13:28:38.126831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:42.152 [2024-11-18 13:28:38.126837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:42.152 [2024-11-18 13:28:38.126843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.152 [2024-11-18 13:28:38.126850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:42.152 [2024-11-18 13:28:38.126856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:15:42.152 [2024-11-18 13:28:38.126863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.128280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.152 [2024-11-18 13:28:38.128298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:42.152 [2024-11-18 13:28:38.128306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:15:42.152 [2024-11-18 13:28:38.128313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.128389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.152 [2024-11-18 13:28:38.128397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:42.152 [2024-11-18 13:28:38.128412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:42.152 [2024-11-18 13:28:38.128421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.133293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.133316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:42.152 [2024-11-18 13:28:38.133323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.133332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 
[2024-11-18 13:28:38.133391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.133399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:42.152 [2024-11-18 13:28:38.133405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.133421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.133498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.133509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:42.152 [2024-11-18 13:28:38.133515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.133522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.133543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.133550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:42.152 [2024-11-18 13:28:38.133556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.133562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.142262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.142295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:42.152 [2024-11-18 13:28:38.142304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.142311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.149485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.149518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:42.152 [2024-11-18 13:28:38.149526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.149534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.149584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.149594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:42.152 [2024-11-18 13:28:38.149600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.149607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.149672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.149680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:42.152 [2024-11-18 13:28:38.149686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.149694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.149765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.149779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:42.152 [2024-11-18 13:28:38.149785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.149792] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.149828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.149836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:42.152 [2024-11-18 13:28:38.149849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.149856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.149893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.149907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:42.152 [2024-11-18 13:28:38.149913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.149920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.149967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.152 [2024-11-18 13:28:38.149976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:42.152 [2024-11-18 13:28:38.149982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.152 [2024-11-18 13:28:38.149989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.152 [2024-11-18 13:28:38.150125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 40.994 ms, result 0 00:15:42.152 true 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83952 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 83952 ']' 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 83952 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83952 00:15:42.152 killing process with pid 83952 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83952' 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 83952 00:15:42.152 13:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 83952 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:47.416 13:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:47.416 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:47.416 fio-3.35 00:15:47.416 Starting 1 thread 00:15:50.701 00:15:50.701 test: (groupid=0, jobs=1): err= 0: pid=84109: Mon Nov 18 13:28:46 2024 00:15:50.701 read: IOPS=1229, BW=81.6MiB/s (85.6MB/s)(255MiB/3118msec) 00:15:50.701 slat (nsec): min=2917, max=89511, avg=3990.50, stdev=2208.78 00:15:50.701 clat (usec): min=253, max=1135, avg=370.22, stdev=90.75 00:15:50.701 lat (usec): min=256, max=1140, avg=374.21, stdev=91.35 00:15:50.701 clat percentiles (usec): 00:15:50.701 | 1.00th=[ 289], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:15:50.701 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:15:50.701 | 70.00th=[ 347], 80.00th=[ 429], 90.00th=[ 523], 95.00th=[ 529], 00:15:50.701 | 99.00th=[ 742], 99.50th=[ 816], 99.90th=[ 963], 99.95th=[ 1090], 00:15:50.701 | 99.99th=[ 1139] 00:15:50.701 write: IOPS=1237, BW=82.2MiB/s (86.2MB/s)(256MiB/3115msec); 0 zone resets 00:15:50.701 slat (nsec): min=13542, max=84312, avg=17428.40, stdev=3412.03 00:15:50.701 clat (usec): min=285, max=1081, avg=406.04, stdev=106.17 00:15:50.701 lat (usec): min=299, max=1099, avg=423.47, stdev=107.34 00:15:50.701 clat percentiles (usec): 00:15:50.701 | 1.00th=[ 310], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 347], 00:15:50.701 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355], 00:15:50.701 | 70.00th=[ 375], 80.00th=[ 482], 90.00th=[ 553], 95.00th=[ 619], 00:15:50.701 | 99.00th=[ 807], 99.50th=[ 824], 99.90th=[ 930], 99.95th=[ 996], 00:15:50.701 | 99.99th=[ 1090] 00:15:50.701 bw ( KiB/s): min=66776, max=93568, per=99.60%, avg=83843.67, stdev=12572.87, samples=6 00:15:50.701 iops : min= 982, max= 1376, avg=1232.83, stdev=185.13, samples=6 00:15:50.701 lat (usec) : 500=84.52%, 750=14.28%, 1000=1.14% 
00:15:50.701 lat (msec) : 2=0.05% 00:15:50.701 cpu : usr=99.29%, sys=0.10%, ctx=6, majf=0, minf=1181 00:15:50.701 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.701 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.701 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.701 00:15:50.701 Run status group 0 (all jobs): 00:15:50.701 READ: bw=81.6MiB/s (85.6MB/s), 81.6MiB/s-81.6MiB/s (85.6MB/s-85.6MB/s), io=255MiB (267MB), run=3118-3118msec 00:15:50.701 WRITE: bw=82.2MiB/s (86.2MB/s), 82.2MiB/s-82.2MiB/s (86.2MB/s-86.2MB/s), io=256MiB (269MB), run=3115-3115msec 00:15:51.271 ----------------------------------------------------- 00:15:51.271 Suppressions used: 00:15:51.271 count bytes template 00:15:51.271 1 5 /usr/src/fio/parse.c 00:15:51.271 1 8 libtcmalloc_minimal.so 00:15:51.271 1 904 libcrypto.so 00:15:51.271 ----------------------------------------------------- 00:15:51.271 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.271 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:51.532 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:51.532 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:51.532 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:51.532 13:28:47 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:51.532 13:28:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:51.532 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:51.532 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:51.532 fio-3.35 00:15:51.532 Starting 2 threads 00:16:18.101 00:16:18.101 first_half: (groupid=0, jobs=1): err= 0: pid=84195: Mon Nov 18 13:29:10 2024 00:16:18.101 read: IOPS=3012, BW=11.8MiB/s (12.3MB/s)(255MiB/21645msec) 00:16:18.101 slat (nsec): min=2961, max=73456, avg=3969.47, stdev=1145.36 00:16:18.101 clat (usec): min=593, max=442908, avg=33562.59, stdev=17619.21 00:16:18.101 lat (usec): min=596, max=442913, avg=33566.56, stdev=17619.30 00:16:18.101 clat percentiles (msec): 00:16:18.101 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 29], 00:16:18.101 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:16:18.101 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 47], 00:16:18.101 | 99.00th=[ 123], 99.50th=[ 136], 99.90th=[ 180], 99.95th=[ 326], 00:16:18.101 | 99.99th=[ 418] 00:16:18.101 write: IOPS=3942, BW=15.4MiB/s (16.1MB/s)(256MiB/16621msec); 0 zone resets 00:16:18.101 slat (usec): min=3, max=1798, avg= 6.24, stdev=13.75 00:16:18.101 clat (usec): min=350, max=87198, avg=8864.23, stdev=14059.99 00:16:18.101 lat (usec): min=366, max=87204, avg=8870.48, stdev=14060.18 00:16:18.101 clat percentiles (usec): 00:16:18.101 | 1.00th=[ 635], 5.00th=[ 709], 10.00th=[ 766], 20.00th=[ 963], 00:16:18.101 | 30.00th=[ 1647], 40.00th=[ 2868], 50.00th=[ 3884], 60.00th=[ 4752], 00:16:18.101 | 70.00th=[ 5604], 80.00th=[13435], 90.00th=[19792], 95.00th=[55313], 00:16:18.101 | 99.00th=[61080], 99.50th=[62129], 99.90th=[77071], 99.95th=[78119], 00:16:18.101 | 99.99th=[86508] 00:16:18.101 bw ( KiB/s): min= 801, max=52904, per=100.00%, avg=29127.17, stdev=15135.17, samples=18 00:16:18.101 iops : min= 200, max=13226, avg=7281.78, stdev=3783.82, samples=18 00:16:18.101 lat (usec) : 500=0.02%, 750=4.17%, 1000=6.50% 00:16:18.101 lat (msec) : 2=5.55%, 4=9.78%, 10=12.87%, 20=7.29%, 50=48.60% 00:16:18.101 lat (msec) : 100=4.32%, 250=0.85%, 500=0.04% 00:16:18.101 cpu : usr=99.38%, sys=0.15%, ctx=51, majf=0, minf=5615 00:16:18.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:18.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.101 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.101 issued rwts: total=65199,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.101 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.101 second_half: (groupid=0, jobs=1): err= 0: pid=84196: Mon Nov 18 13:29:10 2024 00:16:18.101 read: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(255MiB/21771msec) 00:16:18.101 slat (nsec): min=3000, max=26235, avg=3772.61, stdev=978.57 00:16:18.101 clat (usec): min=639, max=462924, avg=33037.11, stdev=19600.47 00:16:18.101 lat (usec): min=643, max=462931, avg=33040.88, stdev=19600.58 00:16:18.101 clat percentiles (msec): 00:16:18.101 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29], 00:16:18.101 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:16:18.101 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 37], 
95.00th=[ 44], 00:16:18.101 | 99.00th=[ 131], 99.50th=[ 146], 99.90th=[ 243], 99.95th=[ 368], 00:16:18.101 | 99.99th=[ 460] 00:16:18.101 write: IOPS=3329, BW=13.0MiB/s (13.6MB/s)(256MiB/19682msec); 0 zone resets 00:16:18.101 slat (usec): min=3, max=1833, avg= 5.52, stdev= 7.80 00:16:18.101 clat (usec): min=323, max=88278, avg=9572.54, stdev=14704.44 00:16:18.101 lat (usec): min=328, max=88283, avg=9578.06, stdev=14704.58 00:16:18.102 clat percentiles (usec): 00:16:18.102 | 1.00th=[ 627], 5.00th=[ 709], 10.00th=[ 783], 20.00th=[ 1037], 00:16:18.102 | 30.00th=[ 2278], 40.00th=[ 3064], 50.00th=[ 3752], 60.00th=[ 4686], 00:16:18.102 | 70.00th=[ 5735], 80.00th=[15008], 90.00th=[25035], 95.00th=[56361], 00:16:18.102 | 99.00th=[62129], 99.50th=[63177], 99.90th=[76022], 99.95th=[79168], 00:16:18.102 | 99.99th=[87557] 00:16:18.102 bw ( KiB/s): min= 32, max=65400, per=89.47%, avg=23834.82, stdev=17496.13, samples=22 00:16:18.102 iops : min= 8, max=16350, avg=5958.68, stdev=4374.04, samples=22 00:16:18.102 lat (usec) : 500=0.03%, 750=3.99%, 1000=5.58% 00:16:18.102 lat (msec) : 2=4.34%, 4=12.95%, 10=11.83%, 20=7.20%, 50=49.03% 00:16:18.102 lat (msec) : 100=4.10%, 250=0.90%, 500=0.05% 00:16:18.102 cpu : usr=99.27%, sys=0.12%, ctx=41, majf=0, minf=5517 00:16:18.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:18.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.102 issued rwts: total=65325,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.102 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.102 00:16:18.102 Run status group 0 (all jobs): 00:16:18.102 READ: bw=23.4MiB/s (24.6MB/s), 11.7MiB/s-11.8MiB/s (12.3MB/s-12.3MB/s), io=510MiB (535MB), run=21645-21771msec 00:16:18.102 WRITE: bw=26.0MiB/s (27.3MB/s), 13.0MiB/s-15.4MiB/s (13.6MB/s-16.1MB/s), io=512MiB (537MB), run=16621-19682msec 00:16:18.102 ----------------------------------------------------- 00:16:18.102 Suppressions used: 00:16:18.102 count bytes template 00:16:18.102 2 10 /usr/src/fio/parse.c 00:16:18.102 2 192 /usr/src/fio/iolog.c 00:16:18.102 1 8 libtcmalloc_minimal.so 00:16:18.102 1 904 libcrypto.so 00:16:18.102 ----------------------------------------------------- 00:16:18.102 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:18.102 13:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:18.102 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:18.102 fio-3.35 00:16:18.102 Starting 1 thread 00:16:33.009 00:16:33.009 test: (groupid=0, jobs=1): err= 0: pid=84486: Mon Nov 18 13:29:27 2024 00:16:33.009 read: IOPS=6458, BW=25.2MiB/s (26.5MB/s)(255MiB/10095msec) 00:16:33.009 slat (usec): min=2, max=379, avg= 5.14, stdev= 3.19 00:16:33.009 clat (usec): min=467, max=51316, avg=19809.56, stdev=3854.81 00:16:33.009 lat (usec): min=474, max=51321, avg=19814.71, stdev=3855.08 00:16:33.009 clat percentiles (usec): 00:16:33.009 | 1.00th=[14222], 5.00th=[14615], 10.00th=[15270], 20.00th=[16319], 00:16:33.009 | 30.00th=[16909], 40.00th=[18220], 50.00th=[19530], 60.00th=[20841], 00:16:33.009 | 70.00th=[21890], 80.00th=[22938], 90.00th=[24511], 95.00th=[26084], 00:16:33.009 | 99.00th=[30802], 99.50th=[32375], 99.90th=[40633], 99.95th=[45876], 00:16:33.009 | 99.99th=[51119] 00:16:33.009 write: IOPS=13.5k, BW=52.8MiB/s (55.4MB/s)(256MiB/4849msec); 0 zone resets 00:16:33.009 slat (usec): min=4, max=2308, avg= 6.00, stdev=10.90 00:16:33.009 clat (usec): min=489, max=45164, avg=9426.51, stdev=9977.52 00:16:33.009 lat (usec): min=494, max=45170, avg=9432.51, stdev=9977.75 00:16:33.009 clat percentiles (usec): 00:16:33.009 | 1.00th=[ 611], 5.00th=[ 685], 10.00th=[ 766], 20.00th=[ 914], 00:16:33.009 | 30.00th=[ 1029], 40.00th=[ 1352], 50.00th=[ 4686], 60.00th=[ 7832], 00:16:33.009 | 70.00th=[15139], 80.00th=[17957], 90.00th=[27132], 95.00th=[28443], 00:16:33.009 | 99.00th=[31065], 99.50th=[33424], 99.90th=[37487], 99.95th=[38536], 00:16:33.009 | 99.99th=[43779] 00:16:33.009 bw ( KiB/s): min=31112, max=91768, per=96.98%, avg=52428.80, stdev=18975.63, samples=10 00:16:33.009 iops : min= 7778, max=22942, avg=13107.20, stdev=4743.91, samples=10 00:16:33.009 lat (usec) : 500=0.01%, 750=4.62%, 1000=9.14% 00:16:33.009 lat (msec) : 2=6.94%, 4=1.57%, 10=8.50%, 20=37.11%, 50=32.12% 00:16:33.009 lat (msec) : 100=0.01% 00:16:33.009 cpu : usr=98.30%, sys=0.29%, ctx=79, 
majf=0, minf=5577 00:16:33.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:33.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.009 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.009 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.009 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.009 00:16:33.009 Run status group 0 (all jobs): 00:16:33.009 READ: bw=25.2MiB/s (26.5MB/s), 25.2MiB/s-25.2MiB/s (26.5MB/s-26.5MB/s), io=255MiB (267MB), run=10095-10095msec 00:16:33.009 WRITE: bw=52.8MiB/s (55.4MB/s), 52.8MiB/s-52.8MiB/s (55.4MB/s-55.4MB/s), io=256MiB (268MB), run=4849-4849msec 00:16:33.009 ----------------------------------------------------- 00:16:33.009 Suppressions used: 00:16:33.009 count bytes template 00:16:33.009 1 5 /usr/src/fio/parse.c 00:16:33.009 2 192 /usr/src/fio/iolog.c 00:16:33.009 1 8 libtcmalloc_minimal.so 00:16:33.009 1 904 libcrypto.so 00:16:33.009 ----------------------------------------------------- 00:16:33.009 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:33.009 Remove shared memory files 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69507 /dev/shm/spdk_tgt_trace.pid82899 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:33.009 00:16:33.009 real 0m56.964s 00:16:33.009 user 2m6.267s 00:16:33.009 sys 0m2.738s 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.009 ************************************ 00:16:33.009 END TEST ftl_fio_basic 00:16:33.009 ************************************ 00:16:33.009 13:29:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:33.009 13:29:28 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:33.009 13:29:28 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:33.009 13:29:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.009 13:29:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:33.009 ************************************ 00:16:33.009 START TEST ftl_bdevperf 00:16:33.009 ************************************ 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:33.009 * Looking for test storage... 
00:16:33.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.009 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:33.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.010 --rc genhtml_branch_coverage=1 00:16:33.010 --rc genhtml_function_coverage=1 00:16:33.010 --rc genhtml_legend=1 00:16:33.010 --rc geninfo_all_blocks=1 00:16:33.010 --rc geninfo_unexecuted_blocks=1 00:16:33.010 00:16:33.010 ' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:33.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.010 --rc genhtml_branch_coverage=1 00:16:33.010 
--rc genhtml_function_coverage=1 00:16:33.010 --rc genhtml_legend=1 00:16:33.010 --rc geninfo_all_blocks=1 00:16:33.010 --rc geninfo_unexecuted_blocks=1 00:16:33.010 00:16:33.010 ' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:33.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.010 --rc genhtml_branch_coverage=1 00:16:33.010 --rc genhtml_function_coverage=1 00:16:33.010 --rc genhtml_legend=1 00:16:33.010 --rc geninfo_all_blocks=1 00:16:33.010 --rc geninfo_unexecuted_blocks=1 00:16:33.010 00:16:33.010 ' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:33.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.010 --rc genhtml_branch_coverage=1 00:16:33.010 --rc genhtml_function_coverage=1 00:16:33.010 --rc genhtml_legend=1 00:16:33.010 --rc geninfo_all_blocks=1 00:16:33.010 --rc geninfo_unexecuted_blocks=1 00:16:33.010 00:16:33.010 ' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84731 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84731 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 84731 ']' 00:16:33.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.010 13:29:28 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:33.010 [2024-11-18 13:29:28.901869] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:16:33.010 [2024-11-18 13:29:28.902015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84731 ] 00:16:33.010 [2024-11-18 13:29:29.061666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.010 [2024-11-18 13:29:29.090681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.952 13:29:29 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.952 13:29:29 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:16:33.952 13:29:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:33.952 13:29:29 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:33.952 13:29:29 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:33.952 13:29:29 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:33.952 13:29:29 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:33.952 13:29:29 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:33.952 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:33.952 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:33.952 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:33.952 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:33.952 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:33.952 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:33.952 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:33.952 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:34.213 { 00:16:34.213 "name": "nvme0n1", 00:16:34.213 "aliases": [ 00:16:34.213 "8b562712-8d5e-4fed-b74c-41923d1bed08" 00:16:34.213 ], 00:16:34.213 "product_name": "NVMe disk", 00:16:34.213 "block_size": 4096, 00:16:34.213 "num_blocks": 1310720, 00:16:34.213 "uuid": "8b562712-8d5e-4fed-b74c-41923d1bed08", 00:16:34.213 "numa_id": -1, 00:16:34.213 "assigned_rate_limits": { 00:16:34.213 "rw_ios_per_sec": 0, 00:16:34.213 "rw_mbytes_per_sec": 0, 00:16:34.213 "r_mbytes_per_sec": 0, 00:16:34.213 "w_mbytes_per_sec": 0 00:16:34.213 }, 00:16:34.213 "claimed": true, 00:16:34.213 "claim_type": "read_many_write_one", 00:16:34.213 "zoned": false, 00:16:34.213 "supported_io_types": { 00:16:34.213 "read": true, 00:16:34.213 "write": true, 00:16:34.213 "unmap": true, 00:16:34.213 "flush": true, 00:16:34.213 "reset": true, 00:16:34.213 "nvme_admin": true, 00:16:34.213 "nvme_io": true, 00:16:34.213 "nvme_io_md": false, 00:16:34.213 "write_zeroes": true, 00:16:34.213 "zcopy": false, 00:16:34.213 "get_zone_info": false, 00:16:34.213 "zone_management": false, 00:16:34.213 "zone_append": false, 00:16:34.213 "compare": true, 00:16:34.213 "compare_and_write": false, 00:16:34.213 "abort": true, 00:16:34.213 "seek_hole": false, 00:16:34.213 "seek_data": false, 00:16:34.213 "copy": true, 00:16:34.213 "nvme_iov_md": false 00:16:34.213 }, 00:16:34.213 "driver_specific": { 00:16:34.213 
"nvme": [ 00:16:34.213 { 00:16:34.213 "pci_address": "0000:00:11.0", 00:16:34.213 "trid": { 00:16:34.213 "trtype": "PCIe", 00:16:34.213 "traddr": "0000:00:11.0" 00:16:34.213 }, 00:16:34.213 "ctrlr_data": { 00:16:34.213 "cntlid": 0, 00:16:34.213 "vendor_id": "0x1b36", 00:16:34.213 "model_number": "QEMU NVMe Ctrl", 00:16:34.213 "serial_number": "12341", 00:16:34.213 "firmware_revision": "8.0.0", 00:16:34.213 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:34.213 "oacs": { 00:16:34.213 "security": 0, 00:16:34.213 "format": 1, 00:16:34.213 "firmware": 0, 00:16:34.213 "ns_manage": 1 00:16:34.213 }, 00:16:34.213 "multi_ctrlr": false, 00:16:34.213 "ana_reporting": false 00:16:34.213 }, 00:16:34.213 "vs": { 00:16:34.213 "nvme_version": "1.4" 00:16:34.213 }, 00:16:34.213 "ns_data": { 00:16:34.213 "id": 1, 00:16:34.213 "can_share": false 00:16:34.213 } 00:16:34.213 } 00:16:34.213 ], 00:16:34.213 "mp_policy": "active_passive" 00:16:34.213 } 00:16:34.213 } 00:16:34.213 ]' 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:34.213 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:34.474 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e68c5ddd-79bd-4e9f-9151-5839254c75c9 00:16:34.474 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:34.474 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e68c5ddd-79bd-4e9f-9151-5839254c75c9 00:16:34.736 13:29:30 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:34.997 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=03606c27-255c-44b7-b306-9868cb788015 00:16:34.997 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 03606c27-255c-44b7-b306-9868cb788015 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:35.258 13:29:31 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:35.258 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:35.520 { 00:16:35.520 "name": "2c7a0940-4a63-4dc7-8994-97636867b3f2", 00:16:35.520 "aliases": [ 00:16:35.520 "lvs/nvme0n1p0" 00:16:35.520 ], 00:16:35.520 "product_name": "Logical Volume", 00:16:35.520 "block_size": 4096, 00:16:35.520 "num_blocks": 26476544, 00:16:35.520 "uuid": "2c7a0940-4a63-4dc7-8994-97636867b3f2", 00:16:35.520 "assigned_rate_limits": { 00:16:35.520 "rw_ios_per_sec": 0, 00:16:35.520 "rw_mbytes_per_sec": 0, 00:16:35.520 "r_mbytes_per_sec": 0, 00:16:35.520 "w_mbytes_per_sec": 0 00:16:35.520 }, 00:16:35.520 "claimed": false, 00:16:35.520 "zoned": false, 00:16:35.520 "supported_io_types": { 00:16:35.520 "read": true, 00:16:35.520 "write": true, 00:16:35.520 "unmap": true, 00:16:35.520 "flush": false, 00:16:35.520 "reset": true, 00:16:35.520 "nvme_admin": false, 00:16:35.520 "nvme_io": false, 00:16:35.520 "nvme_io_md": false, 00:16:35.520 "write_zeroes": true, 00:16:35.520 "zcopy": false, 00:16:35.520 "get_zone_info": false, 00:16:35.520 "zone_management": false, 00:16:35.520 "zone_append": false, 00:16:35.520 "compare": false, 00:16:35.520 "compare_and_write": false, 00:16:35.520 "abort": false, 00:16:35.520 "seek_hole": true, 00:16:35.520 "seek_data": true, 00:16:35.520 "copy": false, 00:16:35.520 "nvme_iov_md": false 00:16:35.520 }, 00:16:35.520 "driver_specific": { 00:16:35.520 "lvol": { 00:16:35.520 "lvol_store_uuid": "03606c27-255c-44b7-b306-9868cb788015", 00:16:35.520 "base_bdev": "nvme0n1", 00:16:35.520 "thin_provision": true, 00:16:35.520 "num_allocated_clusters": 0, 00:16:35.520 "snapshot": false, 00:16:35.520 "clone": false, 00:16:35.520 "esnap_clone": false 00:16:35.520 } 00:16:35.520 } 00:16:35.520 } 00:16:35.520 ]' 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:35.520 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:35.782 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:35.782 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:35.782 13:29:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:35.782 13:29:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:35.782 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:35.782 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:35.782 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:35.782 13:29:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:36.043 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:36.043 { 00:16:36.043 "name": "2c7a0940-4a63-4dc7-8994-97636867b3f2", 00:16:36.043 "aliases": [ 00:16:36.043 "lvs/nvme0n1p0" 00:16:36.043 ], 00:16:36.043 "product_name": "Logical Volume", 00:16:36.043 "block_size": 4096, 00:16:36.043 "num_blocks": 26476544, 00:16:36.043 "uuid": "2c7a0940-4a63-4dc7-8994-97636867b3f2", 00:16:36.043 "assigned_rate_limits": { 00:16:36.043 "rw_ios_per_sec": 0, 00:16:36.043 "rw_mbytes_per_sec": 0, 00:16:36.043 "r_mbytes_per_sec": 0, 00:16:36.043 "w_mbytes_per_sec": 0 00:16:36.043 }, 00:16:36.044 "claimed": false, 00:16:36.044 "zoned": false, 00:16:36.044 "supported_io_types": { 00:16:36.044 "read": true, 00:16:36.044 "write": true, 00:16:36.044 "unmap": true, 00:16:36.044 "flush": false, 00:16:36.044 "reset": true, 00:16:36.044 "nvme_admin": false, 00:16:36.044 "nvme_io": false, 00:16:36.044 "nvme_io_md": false, 00:16:36.044 "write_zeroes": true, 00:16:36.044 "zcopy": false, 00:16:36.044 "get_zone_info": false, 00:16:36.044 "zone_management": false, 00:16:36.044 "zone_append": false, 00:16:36.044 "compare": false, 00:16:36.044 "compare_and_write": false, 00:16:36.044 "abort": false, 00:16:36.044 "seek_hole": true, 00:16:36.044 "seek_data": true, 00:16:36.044 "copy": false, 00:16:36.044 "nvme_iov_md": false 00:16:36.044 }, 00:16:36.044 "driver_specific": { 00:16:36.044 "lvol": { 00:16:36.044 "lvol_store_uuid": "03606c27-255c-44b7-b306-9868cb788015", 00:16:36.044 "base_bdev": "nvme0n1", 00:16:36.044 "thin_provision": true, 00:16:36.044 "num_allocated_clusters": 0, 00:16:36.044 "snapshot": false, 00:16:36.044 "clone": false, 00:16:36.044 "esnap_clone": false 00:16:36.044 } 00:16:36.044 } 00:16:36.044 } 00:16:36.044 ]' 00:16:36.044 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:36.044 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:36.044 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:36.044 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:36.044 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:36.044 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:36.044 13:29:32 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:36.044 13:29:32 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:36.305 13:29:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:36.305 13:29:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:36.305 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:36.305 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:36.305 13:29:32 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:16:36.305 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:36.305 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2c7a0940-4a63-4dc7-8994-97636867b3f2 00:16:36.565 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:36.565 { 00:16:36.565 "name": "2c7a0940-4a63-4dc7-8994-97636867b3f2", 00:16:36.565 "aliases": [ 00:16:36.565 "lvs/nvme0n1p0" 00:16:36.565 ], 00:16:36.565 "product_name": "Logical Volume", 00:16:36.565 "block_size": 4096, 00:16:36.565 "num_blocks": 26476544, 00:16:36.565 "uuid": "2c7a0940-4a63-4dc7-8994-97636867b3f2", 00:16:36.565 "assigned_rate_limits": { 00:16:36.565 "rw_ios_per_sec": 0, 00:16:36.565 "rw_mbytes_per_sec": 0, 00:16:36.565 "r_mbytes_per_sec": 0, 00:16:36.565 "w_mbytes_per_sec": 0 00:16:36.565 }, 00:16:36.565 "claimed": false, 00:16:36.565 "zoned": false, 00:16:36.565 "supported_io_types": { 00:16:36.565 "read": true, 00:16:36.565 "write": true, 00:16:36.565 "unmap": true, 00:16:36.565 "flush": false, 00:16:36.565 "reset": true, 00:16:36.565 "nvme_admin": false, 00:16:36.566 "nvme_io": false, 00:16:36.566 "nvme_io_md": false, 00:16:36.566 "write_zeroes": true, 00:16:36.566 "zcopy": false, 00:16:36.566 "get_zone_info": false, 00:16:36.566 "zone_management": false, 00:16:36.566 "zone_append": false, 00:16:36.566 "compare": false, 00:16:36.566 "compare_and_write": false, 00:16:36.566 "abort": false, 00:16:36.566 "seek_hole": true, 00:16:36.566 "seek_data": true, 00:16:36.566 "copy": false, 00:16:36.566 "nvme_iov_md": false 00:16:36.566 }, 00:16:36.566 "driver_specific": { 00:16:36.566 "lvol": { 00:16:36.566 "lvol_store_uuid": "03606c27-255c-44b7-b306-9868cb788015", 00:16:36.566 "base_bdev": "nvme0n1", 00:16:36.566 "thin_provision": true, 00:16:36.566 "num_allocated_clusters": 0, 00:16:36.566 "snapshot": false, 00:16:36.566 "clone": false, 00:16:36.566 "esnap_clone": false 00:16:36.566 } 00:16:36.566 } 00:16:36.566 } 00:16:36.566 ]' 00:16:36.566 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:36.566 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:36.566 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:36.566 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:36.566 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:36.566 13:29:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:36.566 13:29:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:36.566 13:29:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2c7a0940-4a63-4dc7-8994-97636867b3f2 -c nvc0n1p0 --l2p_dram_limit 20 00:16:36.828 [2024-11-18 13:29:32.761602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.761647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:36.828 [2024-11-18 13:29:32.761663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:36.828 [2024-11-18 13:29:32.761670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.761719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.761726] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:36.828 [2024-11-18 13:29:32.761736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:36.828 [2024-11-18 13:29:32.761745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.761763] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:36.828 [2024-11-18 13:29:32.761989] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:36.828 [2024-11-18 13:29:32.762004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.762010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:36.828 [2024-11-18 13:29:32.762019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:16:36.828 [2024-11-18 13:29:32.762027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.762050] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7e3ed2ba-5bcf-49da-bd08-e260daf77b30 00:16:36.828 [2024-11-18 13:29:32.763254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.763284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:36.828 [2024-11-18 13:29:32.763292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:36.828 [2024-11-18 13:29:32.763301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.768949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.768986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:36.828 [2024-11-18 13:29:32.768997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.595 ms 00:16:36.828 [2024-11-18 13:29:32.769007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.769064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.769075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:36.828 [2024-11-18 13:29:32.769082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:36.828 [2024-11-18 13:29:32.769091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.769127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.769135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:36.828 [2024-11-18 13:29:32.769144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:36.828 [2024-11-18 13:29:32.769151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.769179] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:36.828 [2024-11-18 13:29:32.770563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.770591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:36.828 [2024-11-18 13:29:32.770602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:16:36.828 [2024-11-18 13:29:32.770608] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.770638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.770644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:36.828 [2024-11-18 13:29:32.770653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:36.828 [2024-11-18 13:29:32.770659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.770678] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:36.828 [2024-11-18 13:29:32.770788] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:36.828 [2024-11-18 13:29:32.770804] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:36.828 [2024-11-18 13:29:32.770813] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:36.828 [2024-11-18 13:29:32.770826] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:36.828 [2024-11-18 13:29:32.770832] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:36.828 [2024-11-18 13:29:32.770840] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:36.828 [2024-11-18 13:29:32.770846] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:36.828 [2024-11-18 13:29:32.770854] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:36.828 [2024-11-18 13:29:32.770862] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:36.828 [2024-11-18 13:29:32.770870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.770876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:36.828 [2024-11-18 13:29:32.770883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:16:36.828 [2024-11-18 13:29:32.770889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.770953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.828 [2024-11-18 13:29:32.770964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:36.828 [2024-11-18 13:29:32.770972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:36.828 [2024-11-18 13:29:32.770978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.828 [2024-11-18 13:29:32.771047] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:36.828 [2024-11-18 13:29:32.771061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:36.829 [2024-11-18 13:29:32.771069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.829 [2024-11-18 13:29:32.771080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:36.829 [2024-11-18 13:29:32.771092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:36.829 
[2024-11-18 13:29:32.771105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:36.829 [2024-11-18 13:29:32.771111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.829 [2024-11-18 13:29:32.771125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:36.829 [2024-11-18 13:29:32.771130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:36.829 [2024-11-18 13:29:32.771156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.829 [2024-11-18 13:29:32.771162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:36.829 [2024-11-18 13:29:32.771179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:36.829 [2024-11-18 13:29:32.771186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:36.829 [2024-11-18 13:29:32.771198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:36.829 [2024-11-18 13:29:32.771205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:36.829 [2024-11-18 13:29:32.771218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.829 [2024-11-18 13:29:32.771232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:36.829 [2024-11-18 13:29:32.771238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.829 [2024-11-18 13:29:32.771251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:36.829 [2024-11-18 13:29:32.771258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.829 [2024-11-18 13:29:32.771273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:36.829 [2024-11-18 13:29:32.771279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.829 [2024-11-18 13:29:32.771292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:36.829 [2024-11-18 13:29:32.771299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.829 [2024-11-18 13:29:32.771312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:36.829 [2024-11-18 13:29:32.771318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:36.829 [2024-11-18 13:29:32.771327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.829 [2024-11-18 13:29:32.771332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:36.829 [2024-11-18 13:29:32.771339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:36.829 [2024-11-18 13:29:32.771345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:36.829 [2024-11-18 13:29:32.771362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:36.829 [2024-11-18 13:29:32.771369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771375] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:36.829 [2024-11-18 13:29:32.771387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:36.829 [2024-11-18 13:29:32.771394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.829 [2024-11-18 13:29:32.771401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.829 [2024-11-18 13:29:32.771408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:36.829 [2024-11-18 13:29:32.771416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:36.829 [2024-11-18 13:29:32.771421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:36.829 [2024-11-18 13:29:32.771429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:36.829 [2024-11-18 13:29:32.771435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:36.829 [2024-11-18 13:29:32.771442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:36.829 [2024-11-18 13:29:32.771452] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:36.829 [2024-11-18 13:29:32.771463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.829 [2024-11-18 13:29:32.771470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:36.829 [2024-11-18 13:29:32.771478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:36.829 [2024-11-18 13:29:32.771484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:36.829 [2024-11-18 13:29:32.771493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:36.829 [2024-11-18 13:29:32.771499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:36.829 [2024-11-18 13:29:32.771508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:36.829 [2024-11-18 13:29:32.771514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:36.829 [2024-11-18 13:29:32.771522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:36.829 [2024-11-18 13:29:32.771528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:36.829 [2024-11-18 13:29:32.771535] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:36.829 [2024-11-18 13:29:32.771541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:36.829 [2024-11-18 13:29:32.771549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:36.829 [2024-11-18 13:29:32.771556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:36.829 [2024-11-18 13:29:32.771564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:36.829 [2024-11-18 13:29:32.771570] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:36.829 [2024-11-18 13:29:32.771578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.829 [2024-11-18 13:29:32.771586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:36.829 [2024-11-18 13:29:32.771593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:36.829 [2024-11-18 13:29:32.771599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:36.829 [2024-11-18 13:29:32.771606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:36.829 [2024-11-18 13:29:32.771612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.829 [2024-11-18 13:29:32.771621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:36.829 [2024-11-18 13:29:32.771627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:16:36.829 [2024-11-18 13:29:32.771634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.829 [2024-11-18 13:29:32.771658] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:36.829 [2024-11-18 13:29:32.771671] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:40.129 [2024-11-18 13:29:36.198398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.129 [2024-11-18 13:29:36.198496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:40.129 [2024-11-18 13:29:36.198514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3426.724 ms 00:16:40.129 [2024-11-18 13:29:36.198530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.129 [2024-11-18 13:29:36.212134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.129 [2024-11-18 13:29:36.212211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:40.129 [2024-11-18 13:29:36.212225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.484 ms 00:16:40.129 [2024-11-18 13:29:36.212239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.129 [2024-11-18 13:29:36.212366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.129 [2024-11-18 13:29:36.212384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:40.129 [2024-11-18 13:29:36.212393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:40.129 [2024-11-18 13:29:36.212406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.129 [2024-11-18 13:29:36.234159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.129 [2024-11-18 13:29:36.234239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:40.129 [2024-11-18 13:29:36.234258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.713 ms 00:16:40.129 [2024-11-18 13:29:36.234270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.129 [2024-11-18 13:29:36.234309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.129 [2024-11-18 13:29:36.234322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:40.129 [2024-11-18 13:29:36.234339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:40.129 [2024-11-18 13:29:36.234349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.129 [2024-11-18 13:29:36.234963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.129 [2024-11-18 13:29:36.235012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:40.129 [2024-11-18 13:29:36.235024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:16:40.129 [2024-11-18 13:29:36.235039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.129 [2024-11-18 13:29:36.235215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.129 [2024-11-18 13:29:36.235234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:40.129 [2024-11-18 13:29:36.235244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:16:40.129 [2024-11-18 13:29:36.235258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.129 [2024-11-18 13:29:36.243595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.129 [2024-11-18 13:29:36.243651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:40.129 [2024-11-18 
13:29:36.243665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.319 ms 00:16:40.129 [2024-11-18 13:29:36.243679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.129 [2024-11-18 13:29:36.254043] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:40.390 [2024-11-18 13:29:36.261043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.390 [2024-11-18 13:29:36.261085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:40.391 [2024-11-18 13:29:36.261103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.282 ms 00:16:40.391 [2024-11-18 13:29:36.261110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.348385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.348452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:40.391 [2024-11-18 13:29:36.348471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.241 ms 00:16:40.391 [2024-11-18 13:29:36.348481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.348690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.348702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:40.391 [2024-11-18 13:29:36.348714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:16:40.391 [2024-11-18 13:29:36.348721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.354704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.354753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:40.391 [2024-11-18 13:29:36.354766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.958 ms 00:16:40.391 [2024-11-18 13:29:36.354775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.359594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.359639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:40.391 [2024-11-18 13:29:36.359653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms 00:16:40.391 [2024-11-18 13:29:36.359660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.359996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.360009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:40.391 [2024-11-18 13:29:36.360023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:16:40.391 [2024-11-18 13:29:36.360031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.401216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.401269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:40.391 [2024-11-18 13:29:36.401284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.143 ms 00:16:40.391 [2024-11-18 13:29:36.401293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.408056] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.408108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:40.391 [2024-11-18 13:29:36.408121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.703 ms 00:16:40.391 [2024-11-18 13:29:36.408130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.413493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.413539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:40.391 [2024-11-18 13:29:36.413551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.295 ms 00:16:40.391 [2024-11-18 13:29:36.413558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.419492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.419546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:40.391 [2024-11-18 13:29:36.419561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.885 ms 00:16:40.391 [2024-11-18 13:29:36.419569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.419624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.419633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:40.391 [2024-11-18 13:29:36.419649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:40.391 [2024-11-18 13:29:36.419656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.419732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.391 [2024-11-18 13:29:36.419742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:40.391 [2024-11-18 13:29:36.419753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:40.391 [2024-11-18 13:29:36.419760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.391 [2024-11-18 13:29:36.420859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3658.778 ms, result 0 00:16:40.391 { 00:16:40.391 "name": "ftl0", 00:16:40.391 "uuid": "7e3ed2ba-5bcf-49da-bd08-e260daf77b30" 00:16:40.391 } 00:16:40.391 13:29:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:40.391 13:29:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:40.391 13:29:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:40.652 13:29:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:40.914 [2024-11-18 13:29:36.827513] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:40.914 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:40.914 Zero copy mechanism will not be used. 00:16:40.914 Running I/O for 4 seconds... 
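For context, the steps traced above create the FTL bdev on top of the thin-provisioned lvol (data device) and nvc0n1p0 (write-buffer cache), confirm that it registered, and then launch the first bdevperf workload over RPC. A rough sketch of that sequence, using the exact names from this run (illustrative only, not a reference invocation):

  # create ftl0 with a 20 MiB L2P DRAM limit, as ftl/bdevperf.sh@26 does above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d 2c7a0940-4a63-4dc7-8994-97636867b3f2 -c nvc0n1p0 --l2p_dram_limit 20
  # sanity check that ftl0 is present
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
  # workload 1: 68 KiB random writes, queue depth 1, 4 seconds
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632

The 69632-byte I/O size sits just above bdevperf's 65536-byte zero-copy threshold, which is why the log notes that the zero-copy mechanism will not be used for this pass.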
00:16:42.791 660.00 IOPS, 43.83 MiB/s [2024-11-18T13:29:39.863Z] 1587.00 IOPS, 105.39 MiB/s [2024-11-18T13:29:41.245Z] 2283.33 IOPS, 151.63 MiB/s [2024-11-18T13:29:41.245Z] 2610.25 IOPS, 173.34 MiB/s 00:16:45.117 Latency(us) 00:16:45.117 [2024-11-18T13:29:41.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.117 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:45.117 ftl0 : 4.00 2609.62 173.30 0.00 0.00 404.04 141.78 2785.28 00:16:45.117 [2024-11-18T13:29:41.245Z] =================================================================================================================== 00:16:45.117 [2024-11-18T13:29:41.245Z] Total : 2609.62 173.30 0.00 0.00 404.04 141.78 2785.28 00:16:45.118 [2024-11-18 13:29:40.835377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:45.118 { 00:16:45.118 "results": [ 00:16:45.118 { 00:16:45.118 "job": "ftl0", 00:16:45.118 "core_mask": "0x1", 00:16:45.118 "workload": "randwrite", 00:16:45.118 "status": "finished", 00:16:45.118 "queue_depth": 1, 00:16:45.118 "io_size": 69632, 00:16:45.118 "runtime": 4.001348, 00:16:45.118 "iops": 2609.6205578719973, 00:16:45.118 "mibps": 173.29511517118732, 00:16:45.118 "io_failed": 0, 00:16:45.118 "io_timeout": 0, 00:16:45.118 "avg_latency_us": 404.0368193537931, 00:16:45.118 "min_latency_us": 141.7846153846154, 00:16:45.118 "max_latency_us": 2785.28 00:16:45.118 } 00:16:45.118 ], 00:16:45.118 "core_count": 1 00:16:45.118 } 00:16:45.118 13:29:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:45.118 [2024-11-18 13:29:40.942382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:45.118 Running I/O for 4 seconds... 
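The second perform_tests call keeps the random-write pattern but drops the I/O size to 4 KiB and raises the queue depth to 128; that deeper queue is what moves the average latency from the ~0.4 ms of the first pass to the ~22 ms reported in the result block below. Illustrative form of the call as issued here:

  # workload 2: 4 KiB random writes, queue depth 128, 4 seconds
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096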
00:16:47.006 7459.00 IOPS, 29.14 MiB/s [2024-11-18T13:29:44.080Z] 6324.00 IOPS, 24.70 MiB/s [2024-11-18T13:29:45.025Z] 6056.00 IOPS, 23.66 MiB/s [2024-11-18T13:29:45.025Z] 5748.75 IOPS, 22.46 MiB/s 00:16:48.897 Latency(us) 00:16:48.897 [2024-11-18T13:29:45.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.897 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.897 ftl0 : 4.04 5728.58 22.38 0.00 0.00 22240.69 226.86 52428.80 00:16:48.897 [2024-11-18T13:29:45.025Z] =================================================================================================================== 00:16:48.897 [2024-11-18T13:29:45.025Z] Total : 5728.58 22.38 0.00 0.00 22240.69 0.00 52428.80 00:16:48.897 [2024-11-18 13:29:44.985399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:48.897 { 00:16:48.897 "results": [ 00:16:48.897 { 00:16:48.897 "job": "ftl0", 00:16:48.897 "core_mask": "0x1", 00:16:48.897 "workload": "randwrite", 00:16:48.897 "status": "finished", 00:16:48.897 "queue_depth": 128, 00:16:48.897 "io_size": 4096, 00:16:48.897 "runtime": 4.036427, 00:16:48.897 "iops": 5728.581242767427, 00:16:48.897 "mibps": 22.377270479560263, 00:16:48.897 "io_failed": 0, 00:16:48.897 "io_timeout": 0, 00:16:48.897 "avg_latency_us": 22240.688523913923, 00:16:48.897 "min_latency_us": 226.85538461538462, 00:16:48.897 "max_latency_us": 52428.8 00:16:48.897 } 00:16:48.897 ], 00:16:48.897 "core_count": 1 00:16:48.897 } 00:16:48.897 13:29:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:49.159 [2024-11-18 13:29:45.103377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:49.159 Running I/O for 4 seconds... 
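The third and final pass switches from randwrite to the verify workload at the same 4 KiB I/O size and 128 queue depth, so bdevperf also reads back and checks what it wrote (the results JSON below reports a verify_range of start 0, length 20971520). Illustrative form of the call:

  # workload 3: 4 KiB verify workload, queue depth 128, 4 seconds
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096

Once this run completes, the test tears the device down with bdev_ftl_delete -b ftl0, which is what drives the long 'FTL shutdown' management trace that follows.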
00:16:51.045 4381.00 IOPS, 17.11 MiB/s [2024-11-18T13:29:48.116Z] 4787.00 IOPS, 18.70 MiB/s [2024-11-18T13:29:49.499Z] 5071.33 IOPS, 19.81 MiB/s [2024-11-18T13:29:49.499Z] 5136.25 IOPS, 20.06 MiB/s 00:16:53.371 Latency(us) 00:16:53.371 [2024-11-18T13:29:49.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.371 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:53.371 Verification LBA range: start 0x0 length 0x1400000 00:16:53.371 ftl0 : 4.01 5150.34 20.12 0.00 0.00 24781.33 270.97 69367.34 00:16:53.371 [2024-11-18T13:29:49.499Z] =================================================================================================================== 00:16:53.371 [2024-11-18T13:29:49.499Z] Total : 5150.34 20.12 0.00 0.00 24781.33 0.00 69367.34 00:16:53.371 [2024-11-18 13:29:49.124923] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:53.371 { 00:16:53.371 "results": [ 00:16:53.371 { 00:16:53.371 "job": "ftl0", 00:16:53.371 "core_mask": "0x1", 00:16:53.371 "workload": "verify", 00:16:53.371 "status": "finished", 00:16:53.371 "verify_range": { 00:16:53.371 "start": 0, 00:16:53.371 "length": 20971520 00:16:53.371 }, 00:16:53.371 "queue_depth": 128, 00:16:53.371 "io_size": 4096, 00:16:53.371 "runtime": 4.013136, 00:16:53.371 "iops": 5150.336295605232, 00:16:53.371 "mibps": 20.11850115470794, 00:16:53.371 "io_failed": 0, 00:16:53.371 "io_timeout": 0, 00:16:53.371 "avg_latency_us": 24781.33250881104, 00:16:53.371 "min_latency_us": 270.9661538461539, 00:16:53.371 "max_latency_us": 69367.33538461539 00:16:53.371 } 00:16:53.371 ], 00:16:53.371 "core_count": 1 00:16:53.371 } 00:16:53.371 13:29:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:53.371 [2024-11-18 13:29:49.341272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.371 [2024-11-18 13:29:49.341331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:53.371 [2024-11-18 13:29:49.341348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:53.371 [2024-11-18 13:29:49.341357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.371 [2024-11-18 13:29:49.341383] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:53.372 [2024-11-18 13:29:49.342105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.372 [2024-11-18 13:29:49.342153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:53.372 [2024-11-18 13:29:49.342164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:16:53.372 [2024-11-18 13:29:49.342192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.372 [2024-11-18 13:29:49.345601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.372 [2024-11-18 13:29:49.345654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:53.372 [2024-11-18 13:29:49.345666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:16:53.372 [2024-11-18 13:29:49.345680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.554083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.634 [2024-11-18 13:29:49.554152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:53.634 [2024-11-18 13:29:49.554181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 208.385 ms 00:16:53.634 [2024-11-18 13:29:49.554196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.560337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.634 [2024-11-18 13:29:49.560387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:53.634 [2024-11-18 13:29:49.560399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:16:53.634 [2024-11-18 13:29:49.560414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.563312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.634 [2024-11-18 13:29:49.563366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:53.634 [2024-11-18 13:29:49.563378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:16:53.634 [2024-11-18 13:29:49.563388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.569205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.634 [2024-11-18 13:29:49.569263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:53.634 [2024-11-18 13:29:49.569273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.770 ms 00:16:53.634 [2024-11-18 13:29:49.569287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.569416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.634 [2024-11-18 13:29:49.569429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:53.634 [2024-11-18 13:29:49.569438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:53.634 [2024-11-18 13:29:49.569448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.572437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.634 [2024-11-18 13:29:49.572495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:53.634 [2024-11-18 13:29:49.572505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:16:53.634 [2024-11-18 13:29:49.572514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.575455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.634 [2024-11-18 13:29:49.575512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:53.634 [2024-11-18 13:29:49.575522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:16:53.634 [2024-11-18 13:29:49.575536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.577901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.634 [2024-11-18 13:29:49.577956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:53.634 [2024-11-18 13:29:49.577966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:16:53.634 [2024-11-18 13:29:49.577978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.580420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.634 [2024-11-18 13:29:49.580474] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:53.634 [2024-11-18 13:29:49.580484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.375 ms 00:16:53.634 [2024-11-18 13:29:49.580493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.634 [2024-11-18 13:29:49.580532] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:53.634 [2024-11-18 13:29:49.580550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:53.634 [2024-11-18 13:29:49.580748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:53.634 [2024-11-18 13:29:49.580823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.580997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581452] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:53.635 [2024-11-18 13:29:49.581494] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:53.635 [2024-11-18 13:29:49.581506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7e3ed2ba-5bcf-49da-bd08-e260daf77b30 00:16:53.635 [2024-11-18 13:29:49.581515] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:53.635 [2024-11-18 13:29:49.581522] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:53.635 [2024-11-18 13:29:49.581532] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:53.635 [2024-11-18 13:29:49.581540] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:53.635 [2024-11-18 13:29:49.581556] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:53.635 [2024-11-18 13:29:49.581564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:53.635 [2024-11-18 13:29:49.581573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:53.635 [2024-11-18 13:29:49.581580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:53.635 [2024-11-18 13:29:49.581590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:53.635 [2024-11-18 13:29:49.581597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.635 [2024-11-18 13:29:49.581606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:53.635 [2024-11-18 13:29:49.581624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:16:53.635 [2024-11-18 13:29:49.581633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.635 [2024-11-18 13:29:49.583927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.635 [2024-11-18 13:29:49.583971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:53.635 [2024-11-18 13:29:49.583982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.273 ms 00:16:53.635 [2024-11-18 13:29:49.583993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.635 [2024-11-18 13:29:49.584133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.635 [2024-11-18 13:29:49.584145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:53.635 [2024-11-18 13:29:49.584157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:53.635 [2024-11-18 13:29:49.584201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.635 [2024-11-18 13:29:49.591895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.591948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:53.636 [2024-11-18 13:29:49.591958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.591968] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.592029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.592039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:53.636 [2024-11-18 13:29:49.592055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.592064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.592134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.592148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:53.636 [2024-11-18 13:29:49.592159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.592191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.592206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.592216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:53.636 [2024-11-18 13:29:49.592223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.592239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.605587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.605645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:53.636 [2024-11-18 13:29:49.605656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.605666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.616158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.616229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:53.636 [2024-11-18 13:29:49.616239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.616253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.616319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.616331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:53.636 [2024-11-18 13:29:49.616340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.616350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.616392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.616404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:53.636 [2024-11-18 13:29:49.616412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.616427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.616498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.616511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:53.636 [2024-11-18 13:29:49.616519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:53.636 [2024-11-18 13:29:49.616529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.616562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.616573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:53.636 [2024-11-18 13:29:49.616580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.616590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.616631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.616641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:53.636 [2024-11-18 13:29:49.616653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.616662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.616706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.636 [2024-11-18 13:29:49.616718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:53.636 [2024-11-18 13:29:49.616726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.636 [2024-11-18 13:29:49.616739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.636 [2024-11-18 13:29:49.616879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.569 ms, result 0 00:16:53.636 true 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84731 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 84731 ']' 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 84731 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84731 00:16:53.636 killing process with pid 84731 00:16:53.636 Received shutdown signal, test time was about 4.000000 seconds 00:16:53.636 00:16:53.636 Latency(us) 00:16:53.636 [2024-11-18T13:29:49.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.636 [2024-11-18T13:29:49.764Z] =================================================================================================================== 00:16:53.636 [2024-11-18T13:29:49.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84731' 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 84731 00:16:53.636 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 84731 00:16:53.898 13:29:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:53.898 Remove shared memory files 00:16:53.898 13:29:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:53.898 13:29:49 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:53.898 13:29:49 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:53.898 13:29:49 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:53.898 13:29:49 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:53.898 13:29:49 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:53.898 13:29:49 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:53.898 00:16:53.898 real 0m21.244s 00:16:53.898 user 0m24.056s 00:16:53.898 sys 0m0.932s 00:16:53.898 ************************************ 00:16:53.898 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.898 13:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:53.898 END TEST ftl_bdevperf 00:16:53.898 ************************************ 00:16:53.898 13:29:49 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:53.898 13:29:49 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:53.898 13:29:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.898 13:29:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:53.898 ************************************ 00:16:53.898 START TEST ftl_trim 00:16:53.898 ************************************ 00:16:53.898 13:29:49 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:54.159 * Looking for test storage... 00:16:54.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.159 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:54.159 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:16:54.159 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:54.159 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:54.159 13:29:50 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:54.159 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.159 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:54.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.159 --rc genhtml_branch_coverage=1 00:16:54.159 --rc genhtml_function_coverage=1 00:16:54.159 --rc genhtml_legend=1 00:16:54.159 --rc geninfo_all_blocks=1 00:16:54.159 --rc geninfo_unexecuted_blocks=1 00:16:54.159 00:16:54.159 ' 00:16:54.159 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:54.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.159 --rc genhtml_branch_coverage=1 00:16:54.159 --rc genhtml_function_coverage=1 00:16:54.159 --rc genhtml_legend=1 00:16:54.159 --rc geninfo_all_blocks=1 00:16:54.159 --rc geninfo_unexecuted_blocks=1 00:16:54.159 00:16:54.159 ' 00:16:54.159 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:54.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.159 --rc genhtml_branch_coverage=1 00:16:54.159 --rc genhtml_function_coverage=1 00:16:54.159 --rc genhtml_legend=1 00:16:54.159 --rc geninfo_all_blocks=1 00:16:54.159 --rc geninfo_unexecuted_blocks=1 00:16:54.159 00:16:54.159 ' 00:16:54.159 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:54.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.159 --rc genhtml_branch_coverage=1 00:16:54.159 --rc genhtml_function_coverage=1 00:16:54.159 --rc genhtml_legend=1 00:16:54.159 --rc geninfo_all_blocks=1 00:16:54.159 --rc geninfo_unexecuted_blocks=1 00:16:54.159 00:16:54.159 ' 00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
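The xtrace above is the lcov gate in scripts/common.sh: lt 1.15 2 calls cmp_versions, which splits each dotted version string on '.', '-' and ':' into an array, checks that every compared component is a plain decimal, and walks the arrays element by element up to the longer length. A minimal standalone sketch of that comparison idea (hypothetical helper name, not the actual scripts/common.sh code):

  # version_lt A B -> returns 0 when version A sorts before version B
  version_lt() {
      local IFS=.-:                               # split on dots, dashes and colons
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}         # missing components count as 0
          [[ $x =~ ^[0-9]+$ && $y =~ ^[0-9]+$ ]] || return 2
          (( x > y )) && return 1                 # first differing component decides
          (( x < y )) && return 0
      done
      return 1                                    # equal versions are not "less than"
  }
  # version_lt 1.15 2 && echo "lcov predates 2.x"

Here 1.15 sorts before 2, so the comparison returns 0 and the 1.x-style --rc lcov_branch_coverage / lcov_function_coverage option strings are exported, as the trace shows next.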
00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:54.159 13:29:50 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:54.160 13:29:50 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85070 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85070 00:16:54.160 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85070 ']' 00:16:54.160 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.160 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.160 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.160 13:29:50 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:54.160 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.160 13:29:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:54.160 [2024-11-18 13:29:50.234265] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:16:54.160 [2024-11-18 13:29:50.234414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85070 ] 00:16:54.421 [2024-11-18 13:29:50.394357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:54.421 [2024-11-18 13:29:50.425802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.421 [2024-11-18 13:29:50.426063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.421 [2024-11-18 13:29:50.426124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.992 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.992 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:54.992 13:29:51 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:54.992 13:29:51 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:54.992 13:29:51 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:54.992 13:29:51 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:54.992 13:29:51 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:54.992 13:29:51 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:55.254 13:29:51 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:55.254 13:29:51 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:55.254 13:29:51 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:55.254 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:55.254 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:55.515 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:55.515 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:55.515 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:55.515 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:55.515 { 00:16:55.515 "name": "nvme0n1", 00:16:55.515 "aliases": [ 
00:16:55.515 "6fc01e7b-c804-42e9-99b7-6b98c0837575" 00:16:55.515 ], 00:16:55.515 "product_name": "NVMe disk", 00:16:55.515 "block_size": 4096, 00:16:55.515 "num_blocks": 1310720, 00:16:55.515 "uuid": "6fc01e7b-c804-42e9-99b7-6b98c0837575", 00:16:55.515 "numa_id": -1, 00:16:55.515 "assigned_rate_limits": { 00:16:55.515 "rw_ios_per_sec": 0, 00:16:55.515 "rw_mbytes_per_sec": 0, 00:16:55.515 "r_mbytes_per_sec": 0, 00:16:55.515 "w_mbytes_per_sec": 0 00:16:55.515 }, 00:16:55.515 "claimed": true, 00:16:55.515 "claim_type": "read_many_write_one", 00:16:55.515 "zoned": false, 00:16:55.515 "supported_io_types": { 00:16:55.515 "read": true, 00:16:55.515 "write": true, 00:16:55.515 "unmap": true, 00:16:55.515 "flush": true, 00:16:55.515 "reset": true, 00:16:55.515 "nvme_admin": true, 00:16:55.515 "nvme_io": true, 00:16:55.515 "nvme_io_md": false, 00:16:55.515 "write_zeroes": true, 00:16:55.515 "zcopy": false, 00:16:55.515 "get_zone_info": false, 00:16:55.515 "zone_management": false, 00:16:55.515 "zone_append": false, 00:16:55.515 "compare": true, 00:16:55.515 "compare_and_write": false, 00:16:55.515 "abort": true, 00:16:55.515 "seek_hole": false, 00:16:55.515 "seek_data": false, 00:16:55.515 "copy": true, 00:16:55.515 "nvme_iov_md": false 00:16:55.515 }, 00:16:55.515 "driver_specific": { 00:16:55.515 "nvme": [ 00:16:55.515 { 00:16:55.515 "pci_address": "0000:00:11.0", 00:16:55.515 "trid": { 00:16:55.515 "trtype": "PCIe", 00:16:55.515 "traddr": "0000:00:11.0" 00:16:55.515 }, 00:16:55.515 "ctrlr_data": { 00:16:55.515 "cntlid": 0, 00:16:55.515 "vendor_id": "0x1b36", 00:16:55.515 "model_number": "QEMU NVMe Ctrl", 00:16:55.515 "serial_number": "12341", 00:16:55.515 "firmware_revision": "8.0.0", 00:16:55.515 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:55.515 "oacs": { 00:16:55.515 "security": 0, 00:16:55.515 "format": 1, 00:16:55.515 "firmware": 0, 00:16:55.515 "ns_manage": 1 00:16:55.515 }, 00:16:55.515 "multi_ctrlr": false, 00:16:55.515 "ana_reporting": false 00:16:55.515 }, 00:16:55.515 "vs": { 00:16:55.515 "nvme_version": "1.4" 00:16:55.515 }, 00:16:55.515 "ns_data": { 00:16:55.515 "id": 1, 00:16:55.515 "can_share": false 00:16:55.515 } 00:16:55.515 } 00:16:55.515 ], 00:16:55.515 "mp_policy": "active_passive" 00:16:55.515 } 00:16:55.515 } 00:16:55.515 ]' 00:16:55.515 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:55.515 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:55.515 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:55.774 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:55.774 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:55.774 13:29:51 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:16:55.774 13:29:51 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:55.774 13:29:51 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:55.774 13:29:51 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:55.774 13:29:51 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:55.774 13:29:51 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:55.774 13:29:51 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=03606c27-255c-44b7-b306-9868cb788015 00:16:55.774 13:29:51 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:55.774 13:29:51 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 03606c27-255c-44b7-b306-9868cb788015 00:16:56.035 13:29:52 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:56.296 13:29:52 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=8f5d471d-1909-4e47-9532-580e2a600ded 00:16:56.296 13:29:52 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8f5d471d-1909-4e47-9532-580e2a600ded 00:16:56.557 13:29:52 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:56.557 13:29:52 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:56.557 13:29:52 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:56.557 13:29:52 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:56.557 13:29:52 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:56.557 13:29:52 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:56.557 13:29:52 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:56.557 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:56.557 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:56.557 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:56.557 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:56.557 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:56.818 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:56.818 { 00:16:56.818 "name": "ebdff185-32ac-4e5d-8f97-ec46168b1c9b", 00:16:56.818 "aliases": [ 00:16:56.818 "lvs/nvme0n1p0" 00:16:56.818 ], 00:16:56.818 "product_name": "Logical Volume", 00:16:56.818 "block_size": 4096, 00:16:56.818 "num_blocks": 26476544, 00:16:56.818 "uuid": "ebdff185-32ac-4e5d-8f97-ec46168b1c9b", 00:16:56.818 "assigned_rate_limits": { 00:16:56.818 "rw_ios_per_sec": 0, 00:16:56.818 "rw_mbytes_per_sec": 0, 00:16:56.818 "r_mbytes_per_sec": 0, 00:16:56.818 "w_mbytes_per_sec": 0 00:16:56.818 }, 00:16:56.818 "claimed": false, 00:16:56.818 "zoned": false, 00:16:56.818 "supported_io_types": { 00:16:56.818 "read": true, 00:16:56.818 "write": true, 00:16:56.818 "unmap": true, 00:16:56.818 "flush": false, 00:16:56.818 "reset": true, 00:16:56.818 "nvme_admin": false, 00:16:56.818 "nvme_io": false, 00:16:56.818 "nvme_io_md": false, 00:16:56.818 "write_zeroes": true, 00:16:56.818 "zcopy": false, 00:16:56.818 "get_zone_info": false, 00:16:56.818 "zone_management": false, 00:16:56.818 "zone_append": false, 00:16:56.818 "compare": false, 00:16:56.818 "compare_and_write": false, 00:16:56.818 "abort": false, 00:16:56.818 "seek_hole": true, 00:16:56.818 "seek_data": true, 00:16:56.818 "copy": false, 00:16:56.818 "nvme_iov_md": false 00:16:56.818 }, 00:16:56.818 "driver_specific": { 00:16:56.818 "lvol": { 00:16:56.818 "lvol_store_uuid": "8f5d471d-1909-4e47-9532-580e2a600ded", 00:16:56.818 "base_bdev": "nvme0n1", 00:16:56.818 "thin_provision": true, 00:16:56.818 "num_allocated_clusters": 0, 00:16:56.818 "snapshot": false, 00:16:56.818 "clone": false, 00:16:56.818 "esnap_clone": false 00:16:56.818 } 00:16:56.818 } 00:16:56.818 } 00:16:56.818 ]' 00:16:56.818 13:29:52 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:56.818 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:56.818 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:56.818 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:56.818 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:56.818 13:29:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:56.818 13:29:52 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:56.818 13:29:52 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:56.818 13:29:52 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:57.080 13:29:53 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:57.080 13:29:53 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:57.080 13:29:53 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:57.080 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:57.080 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:57.080 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:57.080 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:57.080 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:57.341 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:57.341 { 00:16:57.341 "name": "ebdff185-32ac-4e5d-8f97-ec46168b1c9b", 00:16:57.341 "aliases": [ 00:16:57.341 "lvs/nvme0n1p0" 00:16:57.341 ], 00:16:57.341 "product_name": "Logical Volume", 00:16:57.341 "block_size": 4096, 00:16:57.341 "num_blocks": 26476544, 00:16:57.341 "uuid": "ebdff185-32ac-4e5d-8f97-ec46168b1c9b", 00:16:57.341 "assigned_rate_limits": { 00:16:57.341 "rw_ios_per_sec": 0, 00:16:57.341 "rw_mbytes_per_sec": 0, 00:16:57.341 "r_mbytes_per_sec": 0, 00:16:57.341 "w_mbytes_per_sec": 0 00:16:57.341 }, 00:16:57.341 "claimed": false, 00:16:57.341 "zoned": false, 00:16:57.341 "supported_io_types": { 00:16:57.341 "read": true, 00:16:57.341 "write": true, 00:16:57.341 "unmap": true, 00:16:57.341 "flush": false, 00:16:57.341 "reset": true, 00:16:57.341 "nvme_admin": false, 00:16:57.341 "nvme_io": false, 00:16:57.341 "nvme_io_md": false, 00:16:57.341 "write_zeroes": true, 00:16:57.341 "zcopy": false, 00:16:57.341 "get_zone_info": false, 00:16:57.341 "zone_management": false, 00:16:57.341 "zone_append": false, 00:16:57.341 "compare": false, 00:16:57.341 "compare_and_write": false, 00:16:57.341 "abort": false, 00:16:57.341 "seek_hole": true, 00:16:57.341 "seek_data": true, 00:16:57.341 "copy": false, 00:16:57.341 "nvme_iov_md": false 00:16:57.341 }, 00:16:57.341 "driver_specific": { 00:16:57.341 "lvol": { 00:16:57.341 "lvol_store_uuid": "8f5d471d-1909-4e47-9532-580e2a600ded", 00:16:57.341 "base_bdev": "nvme0n1", 00:16:57.341 "thin_provision": true, 00:16:57.341 "num_allocated_clusters": 0, 00:16:57.341 "snapshot": false, 00:16:57.341 "clone": false, 00:16:57.341 "esnap_clone": false 00:16:57.341 } 00:16:57.341 } 00:16:57.341 } 00:16:57.341 ]' 00:16:57.341 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:57.341 13:29:53 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:16:57.341 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:57.341 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:57.341 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:57.341 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:57.341 13:29:53 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:57.341 13:29:53 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:57.602 13:29:53 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:57.602 13:29:53 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:57.602 13:29:53 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:57.602 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:57.602 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:57.602 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:57.602 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:57.602 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebdff185-32ac-4e5d-8f97-ec46168b1c9b 00:16:57.863 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:57.863 { 00:16:57.863 "name": "ebdff185-32ac-4e5d-8f97-ec46168b1c9b", 00:16:57.863 "aliases": [ 00:16:57.863 "lvs/nvme0n1p0" 00:16:57.863 ], 00:16:57.863 "product_name": "Logical Volume", 00:16:57.863 "block_size": 4096, 00:16:57.863 "num_blocks": 26476544, 00:16:57.863 "uuid": "ebdff185-32ac-4e5d-8f97-ec46168b1c9b", 00:16:57.863 "assigned_rate_limits": { 00:16:57.863 "rw_ios_per_sec": 0, 00:16:57.863 "rw_mbytes_per_sec": 0, 00:16:57.863 "r_mbytes_per_sec": 0, 00:16:57.863 "w_mbytes_per_sec": 0 00:16:57.863 }, 00:16:57.863 "claimed": false, 00:16:57.863 "zoned": false, 00:16:57.863 "supported_io_types": { 00:16:57.863 "read": true, 00:16:57.863 "write": true, 00:16:57.863 "unmap": true, 00:16:57.863 "flush": false, 00:16:57.863 "reset": true, 00:16:57.863 "nvme_admin": false, 00:16:57.863 "nvme_io": false, 00:16:57.863 "nvme_io_md": false, 00:16:57.863 "write_zeroes": true, 00:16:57.863 "zcopy": false, 00:16:57.863 "get_zone_info": false, 00:16:57.863 "zone_management": false, 00:16:57.863 "zone_append": false, 00:16:57.863 "compare": false, 00:16:57.863 "compare_and_write": false, 00:16:57.863 "abort": false, 00:16:57.863 "seek_hole": true, 00:16:57.863 "seek_data": true, 00:16:57.863 "copy": false, 00:16:57.863 "nvme_iov_md": false 00:16:57.863 }, 00:16:57.863 "driver_specific": { 00:16:57.863 "lvol": { 00:16:57.863 "lvol_store_uuid": "8f5d471d-1909-4e47-9532-580e2a600ded", 00:16:57.863 "base_bdev": "nvme0n1", 00:16:57.863 "thin_provision": true, 00:16:57.863 "num_allocated_clusters": 0, 00:16:57.863 "snapshot": false, 00:16:57.863 "clone": false, 00:16:57.863 "esnap_clone": false 00:16:57.863 } 00:16:57.863 } 00:16:57.863 } 00:16:57.863 ]' 00:16:57.863 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:57.863 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:57.863 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:57.863 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:16:57.863 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:57.863 13:29:53 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:57.863 13:29:53 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:57.863 13:29:53 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ebdff185-32ac-4e5d-8f97-ec46168b1c9b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:58.123 [2024-11-18 13:29:54.003558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.003620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.124 [2024-11-18 13:29:54.003635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:58.124 [2024-11-18 13:29:54.003651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.006528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.006581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.124 [2024-11-18 13:29:54.006593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:16:58.124 [2024-11-18 13:29:54.006606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.006766] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.124 [2024-11-18 13:29:54.007038] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.124 [2024-11-18 13:29:54.007065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.007078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.124 [2024-11-18 13:29:54.007091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:16:58.124 [2024-11-18 13:29:54.007112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.007313] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8e85e72f-8360-491c-ab6b-b1556b7ece1e 00:16:58.124 [2024-11-18 13:29:54.008966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.009015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:58.124 [2024-11-18 13:29:54.009029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:58.124 [2024-11-18 13:29:54.009037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.017836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.017876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.124 [2024-11-18 13:29:54.017891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.694 ms 00:16:58.124 [2024-11-18 13:29:54.017913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.018062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.018073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.124 [2024-11-18 13:29:54.018096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.068 ms 00:16:58.124 [2024-11-18 13:29:54.018119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.018191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.018201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.124 [2024-11-18 13:29:54.018224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:58.124 [2024-11-18 13:29:54.018232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.018279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:58.124 [2024-11-18 13:29:54.020472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.020656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.124 [2024-11-18 13:29:54.020687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:16:58.124 [2024-11-18 13:29:54.020701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.020757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.020768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.124 [2024-11-18 13:29:54.020777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:58.124 [2024-11-18 13:29:54.020797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.020827] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:58.124 [2024-11-18 13:29:54.020977] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:58.124 [2024-11-18 13:29:54.020989] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:58.124 [2024-11-18 13:29:54.021003] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:58.124 [2024-11-18 13:29:54.021026] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021038] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021046] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:58.124 [2024-11-18 13:29:54.021056] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:58.124 [2024-11-18 13:29:54.021064] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:58.124 [2024-11-18 13:29:54.021075] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:58.124 [2024-11-18 13:29:54.021086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 [2024-11-18 13:29:54.021095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.124 [2024-11-18 13:29:54.021104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:58.124 [2024-11-18 13:29:54.021114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.021245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.124 
[2024-11-18 13:29:54.021261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.124 [2024-11-18 13:29:54.021271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:58.124 [2024-11-18 13:29:54.021282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.124 [2024-11-18 13:29:54.021416] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.124 [2024-11-18 13:29:54.021431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.124 [2024-11-18 13:29:54.021441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.124 [2024-11-18 13:29:54.021471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.124 [2024-11-18 13:29:54.021497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.124 [2024-11-18 13:29:54.021513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.124 [2024-11-18 13:29:54.021523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:58.124 [2024-11-18 13:29:54.021532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.124 [2024-11-18 13:29:54.021546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.124 [2024-11-18 13:29:54.021554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:58.124 [2024-11-18 13:29:54.021564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:58.124 [2024-11-18 13:29:54.021581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.124 [2024-11-18 13:29:54.021606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.124 [2024-11-18 13:29:54.021631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.124 [2024-11-18 13:29:54.021653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:58.124 [2024-11-18 13:29:54.021682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.124 [2024-11-18 13:29:54.021705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.124 [2024-11-18 13:29:54.021720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.124 [2024-11-18 13:29:54.021729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:58.124 [2024-11-18 13:29:54.021735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.124 [2024-11-18 13:29:54.021744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:58.124 [2024-11-18 13:29:54.021750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:58.124 [2024-11-18 13:29:54.021759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:58.124 [2024-11-18 13:29:54.021775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:58.124 [2024-11-18 13:29:54.021782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021790] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.124 [2024-11-18 13:29:54.021810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.124 [2024-11-18 13:29:54.021820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.124 [2024-11-18 13:29:54.021828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.124 [2024-11-18 13:29:54.021849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.125 [2024-11-18 13:29:54.021857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.125 [2024-11-18 13:29:54.021866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:58.125 [2024-11-18 13:29:54.021872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.125 [2024-11-18 13:29:54.021881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.125 [2024-11-18 13:29:54.021888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:58.125 [2024-11-18 13:29:54.021900] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.125 [2024-11-18 13:29:54.021909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.125 [2024-11-18 13:29:54.021920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:58.125 [2024-11-18 13:29:54.021928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:58.125 [2024-11-18 13:29:54.021937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:58.125 [2024-11-18 13:29:54.021944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:58.125 [2024-11-18 13:29:54.021953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:58.125 [2024-11-18 13:29:54.021961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:58.125 [2024-11-18 13:29:54.021974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:58.125 [2024-11-18 13:29:54.021982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:58.125 [2024-11-18 13:29:54.021992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:58.125 [2024-11-18 13:29:54.021999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:58.125 [2024-11-18 13:29:54.022008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:58.125 [2024-11-18 13:29:54.022015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:58.125 [2024-11-18 13:29:54.022025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:58.125 [2024-11-18 13:29:54.022032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:58.125 [2024-11-18 13:29:54.022041] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.125 [2024-11-18 13:29:54.022049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.125 [2024-11-18 13:29:54.022062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.125 [2024-11-18 13:29:54.022070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.125 [2024-11-18 13:29:54.022079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.125 [2024-11-18 13:29:54.022086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:58.125 [2024-11-18 13:29:54.022096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.125 [2024-11-18 13:29:54.022104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.125 [2024-11-18 13:29:54.022132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:16:58.125 [2024-11-18 13:29:54.022140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.125 [2024-11-18 13:29:54.022268] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:58.125 [2024-11-18 13:29:54.022280] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:00.654 [2024-11-18 13:29:56.162087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.162310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:00.654 [2024-11-18 13:29:56.162379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2139.806 ms 00:17:00.654 [2024-11-18 13:29:56.162407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.170467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.170621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.654 [2024-11-18 13:29:56.170683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.914 ms 00:17:00.654 [2024-11-18 13:29:56.170706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.170869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.170950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:00.654 [2024-11-18 13:29:56.171004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:00.654 [2024-11-18 13:29:56.171029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.186570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.186717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.654 [2024-11-18 13:29:56.186784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.479 ms 00:17:00.654 [2024-11-18 13:29:56.186808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.186909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.186988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.654 [2024-11-18 13:29:56.187029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:00.654 [2024-11-18 13:29:56.187049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.187406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.187506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.654 [2024-11-18 13:29:56.187565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:17:00.654 [2024-11-18 13:29:56.187587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.187724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.187755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.654 [2024-11-18 13:29:56.187804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:00.654 [2024-11-18 13:29:56.187857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.193094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.193207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:00.654 [2024-11-18 13:29:56.193262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:17:00.654 [2024-11-18 13:29:56.193284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.210124] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:00.654 [2024-11-18 13:29:56.223846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.223955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:00.654 [2024-11-18 13:29:56.224007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.031 ms 00:17:00.654 [2024-11-18 13:29:56.224032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.275691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.275848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:00.654 [2024-11-18 13:29:56.275938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.582 ms 00:17:00.654 [2024-11-18 13:29:56.275984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.276268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.276372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:00.654 [2024-11-18 13:29:56.276440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:17:00.654 [2024-11-18 13:29:56.276491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.279421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.279537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:00.654 [2024-11-18 13:29:56.279609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms 00:17:00.654 [2024-11-18 13:29:56.279648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.282422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.282545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:00.654 [2024-11-18 13:29:56.282651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms 00:17:00.654 [2024-11-18 13:29:56.282724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.283098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.283224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:00.654 [2024-11-18 13:29:56.283295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:00.654 [2024-11-18 13:29:56.283338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.311839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.311973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:00.654 [2024-11-18 13:29:56.312078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.413 ms 00:17:00.654 [2024-11-18 13:29:56.312117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
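All of the mngt/ftl_mngt.c trace_step notices in this stretch are the startup side of the bdev_ftl_create call issued above; the bring-up the trim test drives can be condensed into a handful of rpc.py calls (values exactly as captured in this run, shortened from the xtrace rather than quoted from trim.sh/common.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # base device: a thin-provisioned lvol carved out of the 0000:00:11.0 namespace
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
  base=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

  # write-buffer cache: a 5171 MiB split of the 0000:00:10.0 namespace
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s 5171 1

  # tie them together as ftl0; this is the call whose startup steps are traced here
  $rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

Once the 'FTL startup' management process finishes below, waitforbdev checks for the new device with bdev_get_bdevs -b ftl0 -t 2000, and the matching teardown later in the test is a single bdev_ftl_unload -b ftl0.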
00:17:00.654 [2024-11-18 13:29:56.315764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.315879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:00.654 [2024-11-18 13:29:56.315944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.519 ms 00:17:00.654 [2024-11-18 13:29:56.315970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.318948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.319051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:00.654 [2024-11-18 13:29:56.319106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.928 ms 00:17:00.654 [2024-11-18 13:29:56.319120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.654 [2024-11-18 13:29:56.322265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.654 [2024-11-18 13:29:56.322300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:00.654 [2024-11-18 13:29:56.322309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:17:00.654 [2024-11-18 13:29:56.322321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.655 [2024-11-18 13:29:56.322379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.655 [2024-11-18 13:29:56.322391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:00.655 [2024-11-18 13:29:56.322399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:00.655 [2024-11-18 13:29:56.322408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.655 [2024-11-18 13:29:56.322477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.655 [2024-11-18 13:29:56.322487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:00.655 [2024-11-18 13:29:56.322495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:00.655 [2024-11-18 13:29:56.322514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.655 [2024-11-18 13:29:56.323355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:00.655 [2024-11-18 13:29:56.324354] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2319.542 ms, result 0 00:17:00.655 [2024-11-18 13:29:56.324976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:00.655 { 00:17:00.655 "name": "ftl0", 00:17:00.655 "uuid": "8e85e72f-8360-491c-ab6b-b1556b7ece1e" 00:17:00.655 } 00:17:00.655 13:29:56 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:00.655 13:29:56 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:00.655 13:29:56 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:00.655 13:29:56 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:17:00.655 13:29:56 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:00.655 13:29:56 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:00.655 13:29:56 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:00.655 13:29:56 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:00.655 [ 00:17:00.655 { 00:17:00.655 "name": "ftl0", 00:17:00.655 "aliases": [ 00:17:00.655 "8e85e72f-8360-491c-ab6b-b1556b7ece1e" 00:17:00.655 ], 00:17:00.655 "product_name": "FTL disk", 00:17:00.655 "block_size": 4096, 00:17:00.655 "num_blocks": 23592960, 00:17:00.655 "uuid": "8e85e72f-8360-491c-ab6b-b1556b7ece1e", 00:17:00.655 "assigned_rate_limits": { 00:17:00.655 "rw_ios_per_sec": 0, 00:17:00.655 "rw_mbytes_per_sec": 0, 00:17:00.655 "r_mbytes_per_sec": 0, 00:17:00.655 "w_mbytes_per_sec": 0 00:17:00.655 }, 00:17:00.655 "claimed": false, 00:17:00.655 "zoned": false, 00:17:00.655 "supported_io_types": { 00:17:00.655 "read": true, 00:17:00.655 "write": true, 00:17:00.655 "unmap": true, 00:17:00.655 "flush": true, 00:17:00.655 "reset": false, 00:17:00.655 "nvme_admin": false, 00:17:00.655 "nvme_io": false, 00:17:00.655 "nvme_io_md": false, 00:17:00.655 "write_zeroes": true, 00:17:00.655 "zcopy": false, 00:17:00.655 "get_zone_info": false, 00:17:00.655 "zone_management": false, 00:17:00.655 "zone_append": false, 00:17:00.655 "compare": false, 00:17:00.655 "compare_and_write": false, 00:17:00.655 "abort": false, 00:17:00.655 "seek_hole": false, 00:17:00.655 "seek_data": false, 00:17:00.655 "copy": false, 00:17:00.655 "nvme_iov_md": false 00:17:00.655 }, 00:17:00.655 "driver_specific": { 00:17:00.655 "ftl": { 00:17:00.655 "base_bdev": "ebdff185-32ac-4e5d-8f97-ec46168b1c9b", 00:17:00.655 "cache": "nvc0n1p0" 00:17:00.655 } 00:17:00.655 } 00:17:00.655 } 00:17:00.655 ] 00:17:00.655 13:29:56 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:17:00.655 13:29:56 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:00.655 13:29:56 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:00.913 13:29:56 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:00.913 13:29:56 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:01.172 13:29:57 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:01.172 { 00:17:01.172 "name": "ftl0", 00:17:01.172 "aliases": [ 00:17:01.172 "8e85e72f-8360-491c-ab6b-b1556b7ece1e" 00:17:01.172 ], 00:17:01.172 "product_name": "FTL disk", 00:17:01.172 "block_size": 4096, 00:17:01.172 "num_blocks": 23592960, 00:17:01.172 "uuid": "8e85e72f-8360-491c-ab6b-b1556b7ece1e", 00:17:01.172 "assigned_rate_limits": { 00:17:01.172 "rw_ios_per_sec": 0, 00:17:01.172 "rw_mbytes_per_sec": 0, 00:17:01.172 "r_mbytes_per_sec": 0, 00:17:01.172 "w_mbytes_per_sec": 0 00:17:01.172 }, 00:17:01.172 "claimed": false, 00:17:01.172 "zoned": false, 00:17:01.172 "supported_io_types": { 00:17:01.172 "read": true, 00:17:01.172 "write": true, 00:17:01.172 "unmap": true, 00:17:01.172 "flush": true, 00:17:01.172 "reset": false, 00:17:01.172 "nvme_admin": false, 00:17:01.172 "nvme_io": false, 00:17:01.172 "nvme_io_md": false, 00:17:01.172 "write_zeroes": true, 00:17:01.172 "zcopy": false, 00:17:01.172 "get_zone_info": false, 00:17:01.172 "zone_management": false, 00:17:01.172 "zone_append": false, 00:17:01.172 "compare": false, 00:17:01.172 "compare_and_write": false, 00:17:01.172 "abort": false, 00:17:01.172 "seek_hole": false, 00:17:01.172 "seek_data": false, 00:17:01.172 "copy": false, 00:17:01.172 "nvme_iov_md": false 00:17:01.172 }, 00:17:01.172 "driver_specific": { 00:17:01.172 "ftl": { 00:17:01.172 "base_bdev": "ebdff185-32ac-4e5d-8f97-ec46168b1c9b", 
00:17:01.172 "cache": "nvc0n1p0" 00:17:01.172 } 00:17:01.172 } 00:17:01.172 } 00:17:01.172 ]' 00:17:01.172 13:29:57 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:01.172 13:29:57 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:01.172 13:29:57 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:01.432 [2024-11-18 13:29:57.352952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.432 [2024-11-18 13:29:57.352993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:01.432 [2024-11-18 13:29:57.353007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:01.432 [2024-11-18 13:29:57.353015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.432 [2024-11-18 13:29:57.353053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:01.432 [2024-11-18 13:29:57.353502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.432 [2024-11-18 13:29:57.353536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:01.432 [2024-11-18 13:29:57.353546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:17:01.432 [2024-11-18 13:29:57.353555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.432 [2024-11-18 13:29:57.354020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.432 [2024-11-18 13:29:57.354034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:01.432 [2024-11-18 13:29:57.354043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:17:01.432 [2024-11-18 13:29:57.354052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.432 [2024-11-18 13:29:57.357705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.432 [2024-11-18 13:29:57.357728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:01.432 [2024-11-18 13:29:57.357737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:17:01.432 [2024-11-18 13:29:57.357747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.432 [2024-11-18 13:29:57.364714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.432 [2024-11-18 13:29:57.364744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:01.432 [2024-11-18 13:29:57.364756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.919 ms 00:17:01.432 [2024-11-18 13:29:57.364767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.433 [2024-11-18 13:29:57.366088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.433 [2024-11-18 13:29:57.366124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:01.433 [2024-11-18 13:29:57.366132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:17:01.433 [2024-11-18 13:29:57.366141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.433 [2024-11-18 13:29:57.369892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.433 [2024-11-18 13:29:57.369940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:01.433 [2024-11-18 13:29:57.369950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.691 ms 00:17:01.433 [2024-11-18 13:29:57.369969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.433 [2024-11-18 13:29:57.370140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.433 [2024-11-18 13:29:57.370155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:01.433 [2024-11-18 13:29:57.370163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:01.433 [2024-11-18 13:29:57.370187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.433 [2024-11-18 13:29:57.371764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.433 [2024-11-18 13:29:57.371885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:01.433 [2024-11-18 13:29:57.371899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.549 ms 00:17:01.433 [2024-11-18 13:29:57.371910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.433 [2024-11-18 13:29:57.373142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.433 [2024-11-18 13:29:57.373187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:01.433 [2024-11-18 13:29:57.373196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:17:01.433 [2024-11-18 13:29:57.373205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.433 [2024-11-18 13:29:57.374348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.433 [2024-11-18 13:29:57.374380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:01.433 [2024-11-18 13:29:57.374388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:17:01.433 [2024-11-18 13:29:57.374397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.433 [2024-11-18 13:29:57.375405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.433 [2024-11-18 13:29:57.375437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:01.433 [2024-11-18 13:29:57.375445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:17:01.433 [2024-11-18 13:29:57.375454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.433 [2024-11-18 13:29:57.375495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:01.433 [2024-11-18 13:29:57.375510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375572] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 
13:29:57.375789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.375991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.376001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:17:01.433 [2024-11-18 13:29:57.376008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.376016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.376024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.376033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.376040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.376049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.376056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.376065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:01.433 [2024-11-18 13:29:57.376072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:01.434 [2024-11-18 13:29:57.376391] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:01.434 [2024-11-18 13:29:57.376398] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e85e72f-8360-491c-ab6b-b1556b7ece1e 00:17:01.434 [2024-11-18 13:29:57.376407] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:01.434 [2024-11-18 13:29:57.376414] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:01.434 [2024-11-18 13:29:57.376422] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:01.434 [2024-11-18 13:29:57.376433] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:01.434 [2024-11-18 13:29:57.376441] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:01.434 [2024-11-18 13:29:57.376448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:01.434 
[2024-11-18 13:29:57.376457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:01.434 [2024-11-18 13:29:57.376463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:01.434 [2024-11-18 13:29:57.376470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:01.434 [2024-11-18 13:29:57.376477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.434 [2024-11-18 13:29:57.376486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:01.434 [2024-11-18 13:29:57.376493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:17:01.434 [2024-11-18 13:29:57.376503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.377923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.434 [2024-11-18 13:29:57.377949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:01.434 [2024-11-18 13:29:57.377957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:17:01.434 [2024-11-18 13:29:57.377966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.378068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.434 [2024-11-18 13:29:57.378079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:01.434 [2024-11-18 13:29:57.378098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:01.434 [2024-11-18 13:29:57.378106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.383344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.383455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.434 [2024-11-18 13:29:57.383515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.383540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.383625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.383652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.434 [2024-11-18 13:29:57.383697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.383760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.383842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.383896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.434 [2024-11-18 13:29:57.383946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.383970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.384011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.384157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.434 [2024-11-18 13:29:57.384200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.384314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.393139] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.393339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.434 [2024-11-18 13:29:57.393396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.393421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.400869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.400991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.434 [2024-11-18 13:29:57.401043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.401070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.401132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.401222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.434 [2024-11-18 13:29:57.401276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.401300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.401362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.401464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.434 [2024-11-18 13:29:57.401489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.401509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.401600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.401621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.434 [2024-11-18 13:29:57.401629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.401640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.401683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.401695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:01.434 [2024-11-18 13:29:57.401703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.401714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.401766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.434 [2024-11-18 13:29:57.401776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.434 [2024-11-18 13:29:57.401784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.434 [2024-11-18 13:29:57.401795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.434 [2024-11-18 13:29:57.401849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.435 [2024-11-18 13:29:57.401860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.435 [2024-11-18 13:29:57.401868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.435 [2024-11-18 13:29:57.401876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.435 
[2024-11-18 13:29:57.402034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.075 ms, result 0 00:17:01.435 true 00:17:01.435 13:29:57 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85070 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85070 ']' 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85070 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85070 00:17:01.435 killing process with pid 85070 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85070' 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85070 00:17:01.435 13:29:57 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85070 00:17:06.815 13:30:01 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:06.815 65536+0 records in 00:17:06.815 65536+0 records out 00:17:06.815 268435456 bytes (268 MB, 256 MiB) copied, 0.80112 s, 335 MB/s 00:17:06.815 13:30:02 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:06.815 [2024-11-18 13:30:02.846882] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:17:06.815 [2024-11-18 13:30:02.847000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85226 ] 00:17:07.077 [2024-11-18 13:30:03.003377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.077 [2024-11-18 13:30:03.032062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.077 [2024-11-18 13:30:03.142656] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:07.077 [2024-11-18 13:30:03.142968] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:07.341 [2024-11-18 13:30:03.303594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.341 [2024-11-18 13:30:03.303653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:07.341 [2024-11-18 13:30:03.303669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:07.341 [2024-11-18 13:30:03.303678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.306274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.341 [2024-11-18 13:30:03.306328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.341 [2024-11-18 13:30:03.306340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.576 ms 00:17:07.341 [2024-11-18 13:30:03.306347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.306459] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:07.341 [2024-11-18 13:30:03.306723] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:07.341 [2024-11-18 13:30:03.306738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.341 [2024-11-18 13:30:03.306749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.341 [2024-11-18 13:30:03.306760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:17:07.341 [2024-11-18 13:30:03.306768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.308578] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:07.341 [2024-11-18 13:30:03.312264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.341 [2024-11-18 13:30:03.312312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:07.341 [2024-11-18 13:30:03.312328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.688 ms 00:17:07.341 [2024-11-18 13:30:03.312336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.312419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.341 [2024-11-18 13:30:03.312430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:07.341 [2024-11-18 13:30:03.312439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:07.341 [2024-11-18 13:30:03.312447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.320608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:07.341 [2024-11-18 13:30:03.320654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.341 [2024-11-18 13:30:03.320665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.116 ms 00:17:07.341 [2024-11-18 13:30:03.320682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.320824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.341 [2024-11-18 13:30:03.320836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:07.341 [2024-11-18 13:30:03.320846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:07.341 [2024-11-18 13:30:03.320853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.320885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.341 [2024-11-18 13:30:03.320894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:07.341 [2024-11-18 13:30:03.320908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:07.341 [2024-11-18 13:30:03.320916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.320936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:07.341 [2024-11-18 13:30:03.322976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.341 [2024-11-18 13:30:03.323207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:07.341 [2024-11-18 13:30:03.323236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.044 ms 00:17:07.341 [2024-11-18 13:30:03.323244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.323297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.341 [2024-11-18 13:30:03.323306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:07.341 [2024-11-18 13:30:03.323314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:07.341 [2024-11-18 13:30:03.323322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.341 [2024-11-18 13:30:03.323341] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:07.341 [2024-11-18 13:30:03.323364] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:07.341 [2024-11-18 13:30:03.323407] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:07.341 [2024-11-18 13:30:03.323433] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:07.342 [2024-11-18 13:30:03.323543] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:07.342 [2024-11-18 13:30:03.323554] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:07.342 [2024-11-18 13:30:03.323564] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:07.342 [2024-11-18 13:30:03.323575] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:07.342 [2024-11-18 13:30:03.323584] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:07.342 [2024-11-18 13:30:03.323592] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:07.342 [2024-11-18 13:30:03.323604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:07.342 [2024-11-18 13:30:03.323611] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:07.342 [2024-11-18 13:30:03.323621] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:07.342 [2024-11-18 13:30:03.323635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.342 [2024-11-18 13:30:03.323643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:07.342 [2024-11-18 13:30:03.323651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:07.342 [2024-11-18 13:30:03.323658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.342 [2024-11-18 13:30:03.323746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.342 [2024-11-18 13:30:03.323755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:07.342 [2024-11-18 13:30:03.323764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:07.342 [2024-11-18 13:30:03.323772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.342 [2024-11-18 13:30:03.323881] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:07.342 [2024-11-18 13:30:03.323892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:07.342 [2024-11-18 13:30:03.323904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.342 [2024-11-18 13:30:03.323919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.342 [2024-11-18 13:30:03.323928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:07.342 [2024-11-18 13:30:03.323937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:07.342 [2024-11-18 13:30:03.323945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:07.342 [2024-11-18 13:30:03.323955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:07.342 [2024-11-18 13:30:03.323965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:07.342 [2024-11-18 13:30:03.323972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.342 [2024-11-18 13:30:03.323980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:07.342 [2024-11-18 13:30:03.323989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:07.342 [2024-11-18 13:30:03.323997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.342 [2024-11-18 13:30:03.324004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:07.342 [2024-11-18 13:30:03.324012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:07.342 [2024-11-18 13:30:03.324020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:07.342 [2024-11-18 13:30:03.324036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:07.342 [2024-11-18 13:30:03.324045] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:07.342 [2024-11-18 13:30:03.324061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.342 [2024-11-18 13:30:03.324078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:07.342 [2024-11-18 13:30:03.324090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.342 [2024-11-18 13:30:03.324104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:07.342 [2024-11-18 13:30:03.324112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.342 [2024-11-18 13:30:03.324125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:07.342 [2024-11-18 13:30:03.324132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.342 [2024-11-18 13:30:03.324147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:07.342 [2024-11-18 13:30:03.324154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.342 [2024-11-18 13:30:03.324191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:07.342 [2024-11-18 13:30:03.324199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:07.342 [2024-11-18 13:30:03.324206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.342 [2024-11-18 13:30:03.324214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:07.342 [2024-11-18 13:30:03.324220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:07.342 [2024-11-18 13:30:03.324229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:07.342 [2024-11-18 13:30:03.324243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:07.342 [2024-11-18 13:30:03.324250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324257] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:07.342 [2024-11-18 13:30:03.324265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:07.342 [2024-11-18 13:30:03.324273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.342 [2024-11-18 13:30:03.324284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.342 [2024-11-18 13:30:03.324295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:07.342 [2024-11-18 13:30:03.324303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:07.342 [2024-11-18 13:30:03.324310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:07.342 
[2024-11-18 13:30:03.324318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:07.342 [2024-11-18 13:30:03.324325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:07.342 [2024-11-18 13:30:03.324332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:07.342 [2024-11-18 13:30:03.324341] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:07.342 [2024-11-18 13:30:03.324354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.342 [2024-11-18 13:30:03.324365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:07.342 [2024-11-18 13:30:03.324373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:07.342 [2024-11-18 13:30:03.324388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:07.342 [2024-11-18 13:30:03.324395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:07.342 [2024-11-18 13:30:03.324402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:07.342 [2024-11-18 13:30:03.324410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:07.343 [2024-11-18 13:30:03.324417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:07.343 [2024-11-18 13:30:03.324424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:07.343 [2024-11-18 13:30:03.324431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:07.343 [2024-11-18 13:30:03.324438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:07.343 [2024-11-18 13:30:03.324445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:07.343 [2024-11-18 13:30:03.324452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:07.343 [2024-11-18 13:30:03.324459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:07.343 [2024-11-18 13:30:03.324466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:07.343 [2024-11-18 13:30:03.324473] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:07.343 [2024-11-18 13:30:03.324487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.343 [2024-11-18 13:30:03.324501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:07.343 [2024-11-18 13:30:03.324508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:07.343 [2024-11-18 13:30:03.324516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:07.343 [2024-11-18 13:30:03.324522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:07.343 [2024-11-18 13:30:03.324530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.324537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:07.343 [2024-11-18 13:30:03.324544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:17:07.343 [2024-11-18 13:30:03.324557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.338827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.339021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:07.343 [2024-11-18 13:30:03.339041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.217 ms 00:17:07.343 [2024-11-18 13:30:03.339050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.339233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.339253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:07.343 [2024-11-18 13:30:03.339263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:07.343 [2024-11-18 13:30:03.339271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.358943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.358997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.343 [2024-11-18 13:30:03.359010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.645 ms 00:17:07.343 [2024-11-18 13:30:03.359018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.359109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.359129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.343 [2024-11-18 13:30:03.359138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:07.343 [2024-11-18 13:30:03.359203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.359687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.359722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.343 [2024-11-18 13:30:03.359734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:17:07.343 [2024-11-18 13:30:03.359743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.359929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.359950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.343 [2024-11-18 13:30:03.359966] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:17:07.343 [2024-11-18 13:30:03.359976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.368318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.368371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.343 [2024-11-18 13:30:03.368387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.314 ms 00:17:07.343 [2024-11-18 13:30:03.368396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.372215] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:07.343 [2024-11-18 13:30:03.372268] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:07.343 [2024-11-18 13:30:03.372280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.372289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:07.343 [2024-11-18 13:30:03.372298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.776 ms 00:17:07.343 [2024-11-18 13:30:03.372305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.387621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.387680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:07.343 [2024-11-18 13:30:03.387693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.252 ms 00:17:07.343 [2024-11-18 13:30:03.387700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.390263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.390307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:07.343 [2024-11-18 13:30:03.390318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:17:07.343 [2024-11-18 13:30:03.390325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.392742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.392790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:07.343 [2024-11-18 13:30:03.392800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.352 ms 00:17:07.343 [2024-11-18 13:30:03.392807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.393211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.393229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:07.343 [2024-11-18 13:30:03.393244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:07.343 [2024-11-18 13:30:03.393251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.415792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.415858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:07.343 [2024-11-18 13:30:03.415872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.514 ms 00:17:07.343 [2024-11-18 13:30:03.415881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.423898] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:07.343 [2024-11-18 13:30:03.442340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.442388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:07.343 [2024-11-18 13:30:03.442402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.364 ms 00:17:07.343 [2024-11-18 13:30:03.442411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.442502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.442513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:07.343 [2024-11-18 13:30:03.442523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:07.343 [2024-11-18 13:30:03.442532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.442591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.343 [2024-11-18 13:30:03.442601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:07.343 [2024-11-18 13:30:03.442610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:07.343 [2024-11-18 13:30:03.442625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.343 [2024-11-18 13:30:03.442652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.344 [2024-11-18 13:30:03.442661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:07.344 [2024-11-18 13:30:03.442671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:07.344 [2024-11-18 13:30:03.442679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.344 [2024-11-18 13:30:03.442715] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:07.344 [2024-11-18 13:30:03.442728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.344 [2024-11-18 13:30:03.442736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:07.344 [2024-11-18 13:30:03.442745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:07.344 [2024-11-18 13:30:03.442753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.344 [2024-11-18 13:30:03.448360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.344 [2024-11-18 13:30:03.448409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:07.344 [2024-11-18 13:30:03.448420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.586 ms 00:17:07.344 [2024-11-18 13:30:03.448428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.344 [2024-11-18 13:30:03.448527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.344 [2024-11-18 13:30:03.448541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:07.344 [2024-11-18 13:30:03.448551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:07.344 [2024-11-18 13:30:03.448559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.344 
[2024-11-18 13:30:03.449577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.344 [2024-11-18 13:30:03.450955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.666 ms, result 0 00:17:07.344 [2024-11-18 13:30:03.452613] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:07.344 [2024-11-18 13:30:03.459600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:08.733  [2024-11-18T13:30:05.805Z] Copying: 17/256 [MB] (17 MBps) [2024-11-18T13:30:06.748Z] Copying: 38/256 [MB] (21 MBps) [2024-11-18T13:30:07.691Z] Copying: 58/256 [MB] (19 MBps) [2024-11-18T13:30:08.632Z] Copying: 73/256 [MB] (15 MBps) [2024-11-18T13:30:09.572Z] Copying: 92/256 [MB] (18 MBps) [2024-11-18T13:30:10.525Z] Copying: 119/256 [MB] (26 MBps) [2024-11-18T13:30:11.465Z] Copying: 146/256 [MB] (26 MBps) [2024-11-18T13:30:12.850Z] Copying: 173/256 [MB] (27 MBps) [2024-11-18T13:30:13.795Z] Copying: 192/256 [MB] (19 MBps) [2024-11-18T13:30:14.740Z] Copying: 210/256 [MB] (18 MBps) [2024-11-18T13:30:15.683Z] Copying: 229/256 [MB] (18 MBps) [2024-11-18T13:30:16.257Z] Copying: 246/256 [MB] (17 MBps) [2024-11-18T13:30:16.257Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-18 13:30:16.054236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.129 [2024-11-18 13:30:16.055368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.055403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.129 [2024-11-18 13:30:16.055420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:20.129 [2024-11-18 13:30:16.055433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.055453] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.129 [2024-11-18 13:30:16.055854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.055867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.129 [2024-11-18 13:30:16.055877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:17:20.129 [2024-11-18 13:30:16.055884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.057668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.057699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.129 [2024-11-18 13:30:16.057708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.766 ms 00:17:20.129 [2024-11-18 13:30:16.057715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.064442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.064472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.129 [2024-11-18 13:30:16.064481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.706 ms 00:17:20.129 [2024-11-18 13:30:16.064488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.071409] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.071533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.129 [2024-11-18 13:30:16.071548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.890 ms 00:17:20.129 [2024-11-18 13:30:16.071558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.073831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.073861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.129 [2024-11-18 13:30:16.073870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:17:20.129 [2024-11-18 13:30:16.073876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.077417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.077449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.129 [2024-11-18 13:30:16.077467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.512 ms 00:17:20.129 [2024-11-18 13:30:16.077474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.077589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.077602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.129 [2024-11-18 13:30:16.077610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:20.129 [2024-11-18 13:30:16.077617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.080103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.080132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.129 [2024-11-18 13:30:16.080140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.467 ms 00:17:20.129 [2024-11-18 13:30:16.080147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.082259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.082367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.129 [2024-11-18 13:30:16.082380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:17:20.129 [2024-11-18 13:30:16.082387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.083973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.084001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.129 [2024-11-18 13:30:16.084010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.557 ms 00:17:20.129 [2024-11-18 13:30:16.084016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.129 [2024-11-18 13:30:16.085370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.129 [2024-11-18 13:30:16.085400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.129 [2024-11-18 13:30:16.085408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.301 ms 00:17:20.129 [2024-11-18 13:30:16.085414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:20.129 [2024-11-18 13:30:16.085444] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.129 [2024-11-18 13:30:16.085456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 
[2024-11-18 13:30:16.085629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.129 [2024-11-18 13:30:16.085682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:17:20.130 [2024-11-18 13:30:16.085809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.085997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.130 [2024-11-18 13:30:16.086422] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.130 [2024-11-18 13:30:16.086441] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e85e72f-8360-491c-ab6b-b1556b7ece1e 00:17:20.130 [2024-11-18 13:30:16.086475] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.130 [2024-11-18 13:30:16.086494] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.130 [2024-11-18 13:30:16.086512] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.130 [2024-11-18 13:30:16.086531] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.130 [2024-11-18 13:30:16.086656] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.130 [2024-11-18 13:30:16.086680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.130 [2024-11-18 13:30:16.086698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.130 [2024-11-18 13:30:16.086715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.130 [2024-11-18 13:30:16.086733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.130 [2024-11-18 13:30:16.086751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.130 [2024-11-18 13:30:16.086776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.130 [2024-11-18 13:30:16.086795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:17:20.130 [2024-11-18 13:30:16.086847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.130 [2024-11-18 13:30:16.088262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.130 [2024-11-18 13:30:16.088357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.130 [2024-11-18 13:30:16.088403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:17:20.130 [2024-11-18 13:30:16.088425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.130 [2024-11-18 13:30:16.088518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.130 [2024-11-18 13:30:16.088540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.130 [2024-11-18 13:30:16.088560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:20.130 [2024-11-18 13:30:16.088579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.130 [2024-11-18 13:30:16.093691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.130 [2024-11-18 13:30:16.093796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.130 [2024-11-18 13:30:16.093843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.093854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.093926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.093935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.131 [2024-11-18 13:30:16.093943] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.093950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.093987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.093996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.131 [2024-11-18 13:30:16.094003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.094010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.094026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.094037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.131 [2024-11-18 13:30:16.094044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.094051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.102505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.102541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.131 [2024-11-18 13:30:16.102551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.102560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.109364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.109404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.131 [2024-11-18 13:30:16.109413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.109421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.109462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.109470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.131 [2024-11-18 13:30:16.109478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.109485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.109516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.109525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.131 [2024-11-18 13:30:16.109534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.109541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.109602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.109615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.131 [2024-11-18 13:30:16.109623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.109633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.109661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.109670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:17:20.131 [2024-11-18 13:30:16.109678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.109687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.109722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.109730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.131 [2024-11-18 13:30:16.109737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.109744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.109786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.131 [2024-11-18 13:30:16.109796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.131 [2024-11-18 13:30:16.109806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.131 [2024-11-18 13:30:16.109814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.131 [2024-11-18 13:30:16.109944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.558 ms, result 0 00:17:20.391 00:17:20.391 00:17:20.391 13:30:16 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85382 00:17:20.391 13:30:16 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85382 00:17:20.391 13:30:16 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85382 ']' 00:17:20.391 13:30:16 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:20.392 13:30:16 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.392 13:30:16 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.392 13:30:16 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.392 13:30:16 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.392 13:30:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:20.651 [2024-11-18 13:30:16.578194] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:17:20.651 [2024-11-18 13:30:16.578465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85382 ] 00:17:20.651 [2024-11-18 13:30:16.727724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.651 [2024-11-18 13:30:16.756668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.592 13:30:17 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.592 13:30:17 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:21.592 13:30:17 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:21.592 [2024-11-18 13:30:17.661190] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.592 [2024-11-18 13:30:17.661270] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.854 [2024-11-18 13:30:17.838743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.838978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:21.854 [2024-11-18 13:30:17.839003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:21.854 [2024-11-18 13:30:17.839014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.841560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.841611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.854 [2024-11-18 13:30:17.841622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:17:21.854 [2024-11-18 13:30:17.841636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.841764] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:21.854 [2024-11-18 13:30:17.842042] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:21.854 [2024-11-18 13:30:17.842061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.842072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.854 [2024-11-18 13:30:17.842083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:17:21.854 [2024-11-18 13:30:17.842092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.843942] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:21.854 [2024-11-18 13:30:17.847737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.847899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:21.854 [2024-11-18 13:30:17.847967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.792 ms 00:17:21.854 [2024-11-18 13:30:17.847992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.848079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.848107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:21.854 [2024-11-18 13:30:17.848138] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:21.854 [2024-11-18 13:30:17.848157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.856213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.856367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.854 [2024-11-18 13:30:17.856388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.902 ms 00:17:21.854 [2024-11-18 13:30:17.856396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.856518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.856529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.854 [2024-11-18 13:30:17.856541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:21.854 [2024-11-18 13:30:17.856552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.856582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.856591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:21.854 [2024-11-18 13:30:17.856603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:21.854 [2024-11-18 13:30:17.856611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.856636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:21.854 [2024-11-18 13:30:17.858629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.858672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.854 [2024-11-18 13:30:17.858683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.001 ms 00:17:21.854 [2024-11-18 13:30:17.858695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.858735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.858746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:21.854 [2024-11-18 13:30:17.858754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:21.854 [2024-11-18 13:30:17.858769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.858791] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:21.854 [2024-11-18 13:30:17.858814] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:21.854 [2024-11-18 13:30:17.858858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:21.854 [2024-11-18 13:30:17.858878] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:21.854 [2024-11-18 13:30:17.858985] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:21.854 [2024-11-18 13:30:17.859005] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:21.854 [2024-11-18 13:30:17.859016] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:21.854 [2024-11-18 13:30:17.859028] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:21.854 [2024-11-18 13:30:17.859037] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:21.854 [2024-11-18 13:30:17.859053] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:21.854 [2024-11-18 13:30:17.859061] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:21.854 [2024-11-18 13:30:17.859071] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:21.854 [2024-11-18 13:30:17.859085] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:21.854 [2024-11-18 13:30:17.859094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.859102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:21.854 [2024-11-18 13:30:17.859112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:21.854 [2024-11-18 13:30:17.859120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.859248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.854 [2024-11-18 13:30:17.859258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:21.854 [2024-11-18 13:30:17.859268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:17:21.854 [2024-11-18 13:30:17.859276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.854 [2024-11-18 13:30:17.859384] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:21.854 [2024-11-18 13:30:17.859395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:21.854 [2024-11-18 13:30:17.859408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.854 [2024-11-18 13:30:17.859416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.854 [2024-11-18 13:30:17.859431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:21.854 [2024-11-18 13:30:17.859439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:21.854 [2024-11-18 13:30:17.859449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:21.854 [2024-11-18 13:30:17.859463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:21.854 [2024-11-18 13:30:17.859474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:21.854 [2024-11-18 13:30:17.859482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.854 [2024-11-18 13:30:17.859492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:21.854 [2024-11-18 13:30:17.859499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:21.854 [2024-11-18 13:30:17.859509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.854 [2024-11-18 13:30:17.859517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:21.854 [2024-11-18 13:30:17.859528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:21.854 [2024-11-18 13:30:17.859535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.854 
[2024-11-18 13:30:17.859545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:21.854 [2024-11-18 13:30:17.859555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:21.854 [2024-11-18 13:30:17.859565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.855 [2024-11-18 13:30:17.859573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:21.855 [2024-11-18 13:30:17.859585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:21.855 [2024-11-18 13:30:17.859592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.855 [2024-11-18 13:30:17.859602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:21.855 [2024-11-18 13:30:17.859610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:21.855 [2024-11-18 13:30:17.859619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.855 [2024-11-18 13:30:17.859627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:21.855 [2024-11-18 13:30:17.859638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:21.855 [2024-11-18 13:30:17.859646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.855 [2024-11-18 13:30:17.859656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:21.855 [2024-11-18 13:30:17.859664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:21.855 [2024-11-18 13:30:17.859673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.855 [2024-11-18 13:30:17.859680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:21.855 [2024-11-18 13:30:17.859690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:21.855 [2024-11-18 13:30:17.859697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.855 [2024-11-18 13:30:17.859706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:21.855 [2024-11-18 13:30:17.859714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:21.855 [2024-11-18 13:30:17.859726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.855 [2024-11-18 13:30:17.859732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:21.855 [2024-11-18 13:30:17.859741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:21.855 [2024-11-18 13:30:17.859747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.855 [2024-11-18 13:30:17.859755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:21.855 [2024-11-18 13:30:17.859762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:21.855 [2024-11-18 13:30:17.859770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.855 [2024-11-18 13:30:17.859776] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:21.855 [2024-11-18 13:30:17.859789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:21.855 [2024-11-18 13:30:17.859797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.855 [2024-11-18 13:30:17.859805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.855 [2024-11-18 13:30:17.859814] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:21.855 [2024-11-18 13:30:17.859822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:21.855 [2024-11-18 13:30:17.859829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:21.855 [2024-11-18 13:30:17.859839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:21.855 [2024-11-18 13:30:17.859845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:21.855 [2024-11-18 13:30:17.859857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:21.855 [2024-11-18 13:30:17.859865] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:21.855 [2024-11-18 13:30:17.859880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.855 [2024-11-18 13:30:17.859889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:21.855 [2024-11-18 13:30:17.859898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:21.855 [2024-11-18 13:30:17.859905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:21.855 [2024-11-18 13:30:17.859914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:21.855 [2024-11-18 13:30:17.859921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:21.855 [2024-11-18 13:30:17.859930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:21.855 [2024-11-18 13:30:17.859937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:21.855 [2024-11-18 13:30:17.859946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:21.855 [2024-11-18 13:30:17.859954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:21.855 [2024-11-18 13:30:17.859962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:21.855 [2024-11-18 13:30:17.859969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:21.855 [2024-11-18 13:30:17.859979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:21.855 [2024-11-18 13:30:17.859987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:21.855 [2024-11-18 13:30:17.859999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:21.855 [2024-11-18 13:30:17.860006] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:21.855 [2024-11-18 
13:30:17.860018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.855 [2024-11-18 13:30:17.860027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:21.855 [2024-11-18 13:30:17.860036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:21.855 [2024-11-18 13:30:17.860044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:21.855 [2024-11-18 13:30:17.860055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:21.855 [2024-11-18 13:30:17.860062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.855 [2024-11-18 13:30:17.860072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:21.855 [2024-11-18 13:30:17.860080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:17:21.855 [2024-11-18 13:30:17.860089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.855 [2024-11-18 13:30:17.873967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.855 [2024-11-18 13:30:17.874162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.855 [2024-11-18 13:30:17.874211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.814 ms 00:17:21.855 [2024-11-18 13:30:17.874222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.855 [2024-11-18 13:30:17.874360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.855 [2024-11-18 13:30:17.874376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:21.855 [2024-11-18 13:30:17.874385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:21.855 [2024-11-18 13:30:17.874394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.855 [2024-11-18 13:30:17.886824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.855 [2024-11-18 13:30:17.886875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.855 [2024-11-18 13:30:17.886886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.403 ms 00:17:21.855 [2024-11-18 13:30:17.886896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.855 [2024-11-18 13:30:17.886966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.855 [2024-11-18 13:30:17.886978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.855 [2024-11-18 13:30:17.886987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:21.856 [2024-11-18 13:30:17.886997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.887561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.856 [2024-11-18 13:30:17.887602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.856 [2024-11-18 13:30:17.887614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:17:21.856 [2024-11-18 13:30:17.887624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.887780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.856 [2024-11-18 13:30:17.887799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.856 [2024-11-18 13:30:17.887809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:21.856 [2024-11-18 13:30:17.887821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.896010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.856 [2024-11-18 13:30:17.896061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.856 [2024-11-18 13:30:17.896072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.165 ms 00:17:21.856 [2024-11-18 13:30:17.896081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.899793] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:21.856 [2024-11-18 13:30:17.899845] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:21.856 [2024-11-18 13:30:17.899858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.856 [2024-11-18 13:30:17.899869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:21.856 [2024-11-18 13:30:17.899878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.660 ms 00:17:21.856 [2024-11-18 13:30:17.899887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.915603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.856 [2024-11-18 13:30:17.915655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:21.856 [2024-11-18 13:30:17.915668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.655 ms 00:17:21.856 [2024-11-18 13:30:17.915680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.918481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.856 [2024-11-18 13:30:17.918534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:21.856 [2024-11-18 13:30:17.918545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms 00:17:21.856 [2024-11-18 13:30:17.918555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.921309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.856 [2024-11-18 13:30:17.921481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:21.856 [2024-11-18 13:30:17.921499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.702 ms 00:17:21.856 [2024-11-18 13:30:17.921509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.921847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.856 [2024-11-18 13:30:17.921861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:21.856 [2024-11-18 13:30:17.921871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:21.856 [2024-11-18 13:30:17.921881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.954738] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.856 [2024-11-18 13:30:17.954810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:21.856 [2024-11-18 13:30:17.954826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.833 ms 00:17:21.856 [2024-11-18 13:30:17.954840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.856 [2024-11-18 13:30:17.963070] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:22.116 [2024-11-18 13:30:17.981177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.116 [2024-11-18 13:30:17.981224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.116 [2024-11-18 13:30:17.981239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.215 ms 00:17:22.116 [2024-11-18 13:30:17.981248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.116 [2024-11-18 13:30:17.981342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.116 [2024-11-18 13:30:17.981358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:22.116 [2024-11-18 13:30:17.981377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:22.116 [2024-11-18 13:30:17.981385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.116 [2024-11-18 13:30:17.981444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.116 [2024-11-18 13:30:17.981460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.116 [2024-11-18 13:30:17.981470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:22.116 [2024-11-18 13:30:17.981478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.116 [2024-11-18 13:30:17.981509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.116 [2024-11-18 13:30:17.981517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.116 [2024-11-18 13:30:17.981530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:22.116 [2024-11-18 13:30:17.981541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.116 [2024-11-18 13:30:17.981576] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:22.116 [2024-11-18 13:30:17.981586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.116 [2024-11-18 13:30:17.981595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:22.116 [2024-11-18 13:30:17.981604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:22.116 [2024-11-18 13:30:17.981613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.116 [2024-11-18 13:30:17.987234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.116 [2024-11-18 13:30:17.987281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.116 [2024-11-18 13:30:17.987297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.598 ms 00:17:22.116 [2024-11-18 13:30:17.987312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.116 [2024-11-18 13:30:17.987404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.116 [2024-11-18 13:30:17.987416] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.116 [2024-11-18 13:30:17.987428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:22.116 [2024-11-18 13:30:17.987440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.116 [2024-11-18 13:30:17.988415] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.116 [2024-11-18 13:30:17.989716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 149.354 ms, result 0 00:17:22.116 [2024-11-18 13:30:17.991507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.116 Some configs were skipped because the RPC state that can call them passed over. 00:17:22.116 13:30:18 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:22.116 [2024-11-18 13:30:18.229312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.116 [2024-11-18 13:30:18.229496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:22.116 [2024-11-18 13:30:18.229568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:17:22.116 [2024-11-18 13:30:18.229595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.116 [2024-11-18 13:30:18.229678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.334 ms, result 0 00:17:22.116 true 00:17:22.376 13:30:18 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:22.376 [2024-11-18 13:30:18.396463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.376 [2024-11-18 13:30:18.396501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:22.376 [2024-11-18 13:30:18.396512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.962 ms 00:17:22.376 [2024-11-18 13:30:18.396520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.376 [2024-11-18 13:30:18.396553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.050 ms, result 0 00:17:22.376 true 00:17:22.376 13:30:18 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85382 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85382 ']' 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85382 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85382 00:17:22.376 killing process with pid 85382 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85382' 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85382 00:17:22.376 13:30:18 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85382 00:17:22.636 [2024-11-18 13:30:18.519548] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.519598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:22.636 [2024-11-18 13:30:18.519612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:22.636 [2024-11-18 13:30:18.519621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.519645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:22.636 [2024-11-18 13:30:18.520076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.520093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:22.636 [2024-11-18 13:30:18.520104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:17:22.636 [2024-11-18 13:30:18.520113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.520402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.520415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:22.636 [2024-11-18 13:30:18.520424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:17:22.636 [2024-11-18 13:30:18.520434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.524898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.524930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:22.636 [2024-11-18 13:30:18.524940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.445 ms 00:17:22.636 [2024-11-18 13:30:18.524950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.531893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.531924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:22.636 [2024-11-18 13:30:18.531933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.907 ms 00:17:22.636 [2024-11-18 13:30:18.531943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.534078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.534112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:22.636 [2024-11-18 13:30:18.534121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:17:22.636 [2024-11-18 13:30:18.534129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.537911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.537946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:22.636 [2024-11-18 13:30:18.537955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.750 ms 00:17:22.636 [2024-11-18 13:30:18.537970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.538091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.538102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:22.636 [2024-11-18 13:30:18.538110] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:22.636 [2024-11-18 13:30:18.538119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.540558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.540590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:22.636 [2024-11-18 13:30:18.540599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:17:22.636 [2024-11-18 13:30:18.540610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.542813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.542845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:22.636 [2024-11-18 13:30:18.542853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.170 ms 00:17:22.636 [2024-11-18 13:30:18.542862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.544846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.544892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:22.636 [2024-11-18 13:30:18.544903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.952 ms 00:17:22.636 [2024-11-18 13:30:18.544912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.546553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.636 [2024-11-18 13:30:18.546668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:22.636 [2024-11-18 13:30:18.546682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:17:22.636 [2024-11-18 13:30:18.546691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.636 [2024-11-18 13:30:18.546720] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:22.636 [2024-11-18 13:30:18.546736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546822] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:22.636 [2024-11-18 13:30:18.546936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.546943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.546951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.546959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.546968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.546975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.546984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.546995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 
13:30:18.547029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:22.637 [2024-11-18 13:30:18.547257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:22.637 [2024-11-18 13:30:18.547600] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:22.637 [2024-11-18 13:30:18.547608] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e85e72f-8360-491c-ab6b-b1556b7ece1e 00:17:22.637 [2024-11-18 13:30:18.547621] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:22.637 [2024-11-18 13:30:18.547630] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:22.637 [2024-11-18 13:30:18.547638] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:22.637 [2024-11-18 13:30:18.547646] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:22.637 [2024-11-18 13:30:18.547654] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:22.637 [2024-11-18 13:30:18.547663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:22.637 [2024-11-18 13:30:18.547672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:22.637 [2024-11-18 13:30:18.547678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:22.637 [2024-11-18 13:30:18.547687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:22.637 [2024-11-18 13:30:18.547694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.637 
[2024-11-18 13:30:18.547704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:22.637 [2024-11-18 13:30:18.547712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:17:22.637 [2024-11-18 13:30:18.547722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.637 [2024-11-18 13:30:18.549082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.637 [2024-11-18 13:30:18.549099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:22.638 [2024-11-18 13:30:18.549108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:17:22.638 [2024-11-18 13:30:18.549117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.549213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.638 [2024-11-18 13:30:18.549224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:22.638 [2024-11-18 13:30:18.549236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:22.638 [2024-11-18 13:30:18.549245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.554516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.554623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.638 [2024-11-18 13:30:18.554671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.554696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.554785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.554811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.638 [2024-11-18 13:30:18.554832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.554888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.554945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.555049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.638 [2024-11-18 13:30:18.555074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.555116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.555163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.555361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.638 [2024-11-18 13:30:18.555385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.555406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.564454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.564568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.638 [2024-11-18 13:30:18.564616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.564640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.571397] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.571510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.638 [2024-11-18 13:30:18.571558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.571585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.571653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.571705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.638 [2024-11-18 13:30:18.571729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.571749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.571817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.571927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.638 [2024-11-18 13:30:18.571985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.572010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.572101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.572132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.638 [2024-11-18 13:30:18.572155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.572252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.572314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.572340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:22.638 [2024-11-18 13:30:18.572359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.572381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.572430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.572490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.638 [2024-11-18 13:30:18.572515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.572535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.572590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.638 [2024-11-18 13:30:18.572732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.638 [2024-11-18 13:30:18.572758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.638 [2024-11-18 13:30:18.572778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.638 [2024-11-18 13:30:18.572925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.357 ms, result 0 00:17:22.638 13:30:18 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:22.638 13:30:18 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:22.897 [2024-11-18 13:30:18.801731] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:17:22.897 [2024-11-18 13:30:18.801935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85419 ] 00:17:22.897 [2024-11-18 13:30:18.953700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.897 [2024-11-18 13:30:18.972963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.162 [2024-11-18 13:30:19.061916] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.162 [2024-11-18 13:30:19.062132] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.162 [2024-11-18 13:30:19.218613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.162 [2024-11-18 13:30:19.218764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:23.162 [2024-11-18 13:30:19.218827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.162 [2024-11-18 13:30:19.218851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.162 [2024-11-18 13:30:19.221197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.162 [2024-11-18 13:30:19.221316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.162 [2024-11-18 13:30:19.221368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.311 ms 00:17:23.162 [2024-11-18 13:30:19.221389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.162 [2024-11-18 13:30:19.221932] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:23.162 [2024-11-18 13:30:19.222279] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:23.162 [2024-11-18 13:30:19.222334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.162 [2024-11-18 13:30:19.222401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.162 [2024-11-18 13:30:19.222476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:17:23.162 [2024-11-18 13:30:19.222499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.162 [2024-11-18 13:30:19.224055] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:23.163 [2024-11-18 13:30:19.226977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.163 [2024-11-18 13:30:19.227094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:23.163 [2024-11-18 13:30:19.227159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:17:23.163 [2024-11-18 13:30:19.227196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.163 [2024-11-18 13:30:19.227519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.163 [2024-11-18 13:30:19.227583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:23.163 [2024-11-18 13:30:19.227609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:17:23.163 [2024-11-18 13:30:19.227677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.163 [2024-11-18 13:30:19.233342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.163 [2024-11-18 13:30:19.233463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.163 [2024-11-18 13:30:19.233477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.593 ms 00:17:23.163 [2024-11-18 13:30:19.233492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.163 [2024-11-18 13:30:19.233623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.163 [2024-11-18 13:30:19.233635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.163 [2024-11-18 13:30:19.233647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:23.163 [2024-11-18 13:30:19.233654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.163 [2024-11-18 13:30:19.233682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.163 [2024-11-18 13:30:19.233695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:23.163 [2024-11-18 13:30:19.233703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:23.163 [2024-11-18 13:30:19.233710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.163 [2024-11-18 13:30:19.233732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:23.163 [2024-11-18 13:30:19.235424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.163 [2024-11-18 13:30:19.235452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.163 [2024-11-18 13:30:19.235462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:17:23.163 [2024-11-18 13:30:19.235469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.163 [2024-11-18 13:30:19.235510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.163 [2024-11-18 13:30:19.235518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:23.163 [2024-11-18 13:30:19.235527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:23.163 [2024-11-18 13:30:19.235534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.164 [2024-11-18 13:30:19.235552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:23.164 [2024-11-18 13:30:19.235570] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:23.164 [2024-11-18 13:30:19.235604] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:23.164 [2024-11-18 13:30:19.235625] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:23.164 [2024-11-18 13:30:19.235728] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:23.164 [2024-11-18 13:30:19.235742] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:23.164 [2024-11-18 13:30:19.235752] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:23.164 [2024-11-18 13:30:19.235762] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:23.164 [2024-11-18 13:30:19.235771] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:23.164 [2024-11-18 13:30:19.235779] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:23.164 [2024-11-18 13:30:19.235786] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:23.164 [2024-11-18 13:30:19.235794] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:23.164 [2024-11-18 13:30:19.235806] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:23.164 [2024-11-18 13:30:19.235816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.164 [2024-11-18 13:30:19.235823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:23.164 [2024-11-18 13:30:19.235831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:17:23.164 [2024-11-18 13:30:19.235838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.164 [2024-11-18 13:30:19.235925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.164 [2024-11-18 13:30:19.235933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:23.164 [2024-11-18 13:30:19.235943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:23.164 [2024-11-18 13:30:19.235950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.164 [2024-11-18 13:30:19.236052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:23.164 [2024-11-18 13:30:19.236062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:23.164 [2024-11-18 13:30:19.236072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.164 [2024-11-18 13:30:19.236084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.164 [2024-11-18 13:30:19.236093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:23.164 [2024-11-18 13:30:19.236100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:23.164 [2024-11-18 13:30:19.236106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:23.165 [2024-11-18 13:30:19.236117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:23.165 [2024-11-18 13:30:19.236125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:23.165 [2024-11-18 13:30:19.236131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.165 [2024-11-18 13:30:19.236138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:23.165 [2024-11-18 13:30:19.236145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:23.165 [2024-11-18 13:30:19.236150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.165 [2024-11-18 13:30:19.236157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:23.165 [2024-11-18 13:30:19.236175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:23.165 [2024-11-18 13:30:19.236183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.165 [2024-11-18 13:30:19.236190] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:23.165 [2024-11-18 13:30:19.236199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:23.165 [2024-11-18 13:30:19.236206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.165 [2024-11-18 13:30:19.236213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:23.165 [2024-11-18 13:30:19.236219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:23.165 [2024-11-18 13:30:19.236226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.165 [2024-11-18 13:30:19.236233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:23.165 [2024-11-18 13:30:19.236244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:23.165 [2024-11-18 13:30:19.236250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.165 [2024-11-18 13:30:19.236257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:23.165 [2024-11-18 13:30:19.236264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:23.165 [2024-11-18 13:30:19.236270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.165 [2024-11-18 13:30:19.236277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:23.165 [2024-11-18 13:30:19.236284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:23.166 [2024-11-18 13:30:19.236291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.166 [2024-11-18 13:30:19.236305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:23.166 [2024-11-18 13:30:19.236313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:23.166 [2024-11-18 13:30:19.236319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.166 [2024-11-18 13:30:19.236326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:23.166 [2024-11-18 13:30:19.236332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:23.166 [2024-11-18 13:30:19.236338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.166 [2024-11-18 13:30:19.236345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:23.166 [2024-11-18 13:30:19.236351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:23.166 [2024-11-18 13:30:19.236359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.166 [2024-11-18 13:30:19.236366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:23.166 [2024-11-18 13:30:19.236373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:23.166 [2024-11-18 13:30:19.236379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.166 [2024-11-18 13:30:19.236385] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:23.166 [2024-11-18 13:30:19.236392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:23.166 [2024-11-18 13:30:19.236399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.166 [2024-11-18 13:30:19.236406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.166 [2024-11-18 13:30:19.236413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:23.166 
[2024-11-18 13:30:19.236420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:23.166 [2024-11-18 13:30:19.236427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:23.166 [2024-11-18 13:30:19.236433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:23.166 [2024-11-18 13:30:19.236440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:23.166 [2024-11-18 13:30:19.236446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:23.167 [2024-11-18 13:30:19.236454] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:23.167 [2024-11-18 13:30:19.236463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.167 [2024-11-18 13:30:19.236473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:23.167 [2024-11-18 13:30:19.236481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:23.167 [2024-11-18 13:30:19.236489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:23.167 [2024-11-18 13:30:19.236495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:23.167 [2024-11-18 13:30:19.236502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:23.167 [2024-11-18 13:30:19.236509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:23.167 [2024-11-18 13:30:19.236516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:23.167 [2024-11-18 13:30:19.236522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:23.167 [2024-11-18 13:30:19.236529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:23.167 [2024-11-18 13:30:19.236536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:23.167 [2024-11-18 13:30:19.236543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:23.167 [2024-11-18 13:30:19.236550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:23.167 [2024-11-18 13:30:19.236556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:23.167 [2024-11-18 13:30:19.236563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:23.167 [2024-11-18 13:30:19.236570] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:23.167 [2024-11-18 13:30:19.236578] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.167 [2024-11-18 13:30:19.236590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:23.167 [2024-11-18 13:30:19.236598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:23.167 [2024-11-18 13:30:19.236606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:23.167 [2024-11-18 13:30:19.236613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:23.167 [2024-11-18 13:30:19.236621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.167 [2024-11-18 13:30:19.236628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:23.167 [2024-11-18 13:30:19.236635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:17:23.168 [2024-11-18 13:30:19.236642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.168 [2024-11-18 13:30:19.247071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.168 [2024-11-18 13:30:19.247243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.168 [2024-11-18 13:30:19.247261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.380 ms 00:17:23.168 [2024-11-18 13:30:19.247269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.168 [2024-11-18 13:30:19.247390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.168 [2024-11-18 13:30:19.247406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:23.168 [2024-11-18 13:30:19.247419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:23.168 [2024-11-18 13:30:19.247426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.168 [2024-11-18 13:30:19.269790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.168 [2024-11-18 13:30:19.269841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.168 [2024-11-18 13:30:19.269855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.340 ms 00:17:23.168 [2024-11-18 13:30:19.269869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.168 [2024-11-18 13:30:19.269960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.168 [2024-11-18 13:30:19.269976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.168 [2024-11-18 13:30:19.269985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.168 [2024-11-18 13:30:19.269994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.168 [2024-11-18 13:30:19.270454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.168 [2024-11-18 13:30:19.270482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.168 [2024-11-18 13:30:19.270495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:17:23.168 [2024-11-18 13:30:19.270504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.168 [2024-11-18 
13:30:19.270667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.168 [2024-11-18 13:30:19.270684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.169 [2024-11-18 13:30:19.270696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:17:23.169 [2024-11-18 13:30:19.270706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.169 [2024-11-18 13:30:19.278015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.169 [2024-11-18 13:30:19.278056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.169 [2024-11-18 13:30:19.278074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.284 ms 00:17:23.169 [2024-11-18 13:30:19.278085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.169 [2024-11-18 13:30:19.281728] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:23.169 [2024-11-18 13:30:19.281776] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:23.169 [2024-11-18 13:30:19.281791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.169 [2024-11-18 13:30:19.281800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:23.169 [2024-11-18 13:30:19.281811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.569 ms 00:17:23.169 [2024-11-18 13:30:19.281820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.297055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.297241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:23.440 [2024-11-18 13:30:19.297262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.173 ms 00:17:23.440 [2024-11-18 13:30:19.297271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.300001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.300044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:23.440 [2024-11-18 13:30:19.300054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:17:23.440 [2024-11-18 13:30:19.300062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.302541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.302582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:23.440 [2024-11-18 13:30:19.302591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:17:23.440 [2024-11-18 13:30:19.302598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.302934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.302948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.440 [2024-11-18 13:30:19.302956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:23.440 [2024-11-18 13:30:19.302963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.324774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.324832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:23.440 [2024-11-18 13:30:19.324853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.789 ms 00:17:23.440 [2024-11-18 13:30:19.324862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.332869] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.440 [2024-11-18 13:30:19.351066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.351124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.440 [2024-11-18 13:30:19.351138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.114 ms 00:17:23.440 [2024-11-18 13:30:19.351162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.351273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.351285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:23.440 [2024-11-18 13:30:19.351295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:23.440 [2024-11-18 13:30:19.351307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.351363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.351392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.440 [2024-11-18 13:30:19.351401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:23.440 [2024-11-18 13:30:19.351409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.351435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.351449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.440 [2024-11-18 13:30:19.351458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:23.440 [2024-11-18 13:30:19.351466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.351503] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:23.440 [2024-11-18 13:30:19.351513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.351522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:23.440 [2024-11-18 13:30:19.351530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:23.440 [2024-11-18 13:30:19.351538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.357220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.357264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.440 [2024-11-18 13:30:19.357274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.657 ms 00:17:23.440 [2024-11-18 13:30:19.357284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.440 [2024-11-18 13:30:19.357379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.440 [2024-11-18 13:30:19.357390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:23.440 [2024-11-18 13:30:19.357399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:23.441 [2024-11-18 13:30:19.357408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.441 [2024-11-18 13:30:19.358368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.441 [2024-11-18 13:30:19.359713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.414 ms, result 0 00:17:23.441 [2024-11-18 13:30:19.360886] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.441 [2024-11-18 13:30:19.368315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:24.385  [2024-11-18T13:30:21.455Z] Copying: 23/256 [MB] (23 MBps) [2024-11-18T13:30:22.396Z] Copying: 38/256 [MB] (15 MBps) [2024-11-18T13:30:23.780Z] Copying: 57/256 [MB] (19 MBps) [2024-11-18T13:30:24.725Z] Copying: 78/256 [MB] (20 MBps) [2024-11-18T13:30:25.669Z] Copying: 90/256 [MB] (12 MBps) [2024-11-18T13:30:26.612Z] Copying: 102/256 [MB] (11 MBps) [2024-11-18T13:30:27.555Z] Copying: 112/256 [MB] (10 MBps) [2024-11-18T13:30:28.514Z] Copying: 122/256 [MB] (10 MBps) [2024-11-18T13:30:29.487Z] Copying: 141/256 [MB] (18 MBps) [2024-11-18T13:30:30.428Z] Copying: 163/256 [MB] (21 MBps) [2024-11-18T13:30:31.375Z] Copying: 180/256 [MB] (16 MBps) [2024-11-18T13:30:32.764Z] Copying: 195/256 [MB] (15 MBps) [2024-11-18T13:30:33.709Z] Copying: 218/256 [MB] (23 MBps) [2024-11-18T13:30:34.284Z] Copying: 239/256 [MB] (21 MBps) [2024-11-18T13:30:34.284Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-18 13:30:34.254687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.156 [2024-11-18 13:30:34.256666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.156 [2024-11-18 13:30:34.256712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:38.156 [2024-11-18 13:30:34.256726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:38.156 [2024-11-18 13:30:34.256736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.156 [2024-11-18 13:30:34.256758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:38.156 [2024-11-18 13:30:34.257427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.156 [2024-11-18 13:30:34.257459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:38.156 [2024-11-18 13:30:34.257471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:17:38.156 [2024-11-18 13:30:34.257480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.156 [2024-11-18 13:30:34.257742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.156 [2024-11-18 13:30:34.257759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:38.156 [2024-11-18 13:30:34.257769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:17:38.156 [2024-11-18 13:30:34.257781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.156 [2024-11-18 13:30:34.261493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.156 [2024-11-18 13:30:34.261516] 
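(Aside: every management step in the trace above is logged by trace_step as a name/duration/status triple, and the whole pass is totalled in the "Management process finished, name 'FTL startup', duration = 139.414 ms, result 0" line. A minimal sketch for pulling the per-step timings out of a saved copy of this console output, assuming one entry per line as the console emits it and a hypothetical file name console.log:

  # List FTL trace_step timings, slowest first (illustrative one-off, not an SPDK tool).
  awk -F'name: |duration: ' '
    /trace_step:/ && /name: /     { step = $2 }                            # remember the step name
    /trace_step:/ && /duration: / { printf "%10.3f ms  %s\n", $2, step }   # pair it with the duration line that follows
  ' console.log | sort -rn | head

On this output that puts steps such as Initialize L2P (26.114 ms) and Restore P2L checkpoints (21.789 ms) at the top.)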
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:38.156 [2024-11-18 13:30:34.261526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.695 ms 00:17:38.156 [2024-11-18 13:30:34.261534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.157 [2024-11-18 13:30:34.268516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.157 [2024-11-18 13:30:34.268555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:38.157 [2024-11-18 13:30:34.268566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.964 ms 00:17:38.157 [2024-11-18 13:30:34.268589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.157 [2024-11-18 13:30:34.271079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.157 [2024-11-18 13:30:34.271127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:38.157 [2024-11-18 13:30:34.271137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.428 ms 00:17:38.157 [2024-11-18 13:30:34.271155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.157 [2024-11-18 13:30:34.275396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.157 [2024-11-18 13:30:34.275593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:38.157 [2024-11-18 13:30:34.275614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.169 ms 00:17:38.157 [2024-11-18 13:30:34.275622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.157 [2024-11-18 13:30:34.275752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.157 [2024-11-18 13:30:34.275761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:38.157 [2024-11-18 13:30:34.275770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:38.157 [2024-11-18 13:30:34.275785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.157 [2024-11-18 13:30:34.278899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.157 [2024-11-18 13:30:34.279058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:38.157 [2024-11-18 13:30:34.279075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.095 ms 00:17:38.157 [2024-11-18 13:30:34.279082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.157 [2024-11-18 13:30:34.281795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.157 [2024-11-18 13:30:34.281843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:38.157 [2024-11-18 13:30:34.281852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:17:38.157 [2024-11-18 13:30:34.281859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.419 [2024-11-18 13:30:34.284013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.419 [2024-11-18 13:30:34.284189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:38.419 [2024-11-18 13:30:34.284260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:17:38.419 [2024-11-18 13:30:34.284284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.419 [2024-11-18 13:30:34.285834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:38.419 [2024-11-18 13:30:34.285989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:38.419 [2024-11-18 13:30:34.286006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:17:38.419 [2024-11-18 13:30:34.286015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.419 [2024-11-18 13:30:34.286051] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:38.419 [2024-11-18 13:30:34.286068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286242] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286430] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:38.419 [2024-11-18 13:30:34.286489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 
13:30:34.286616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:17:38.420 [2024-11-18 13:30:34.286801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:38.420 [2024-11-18 13:30:34.286852] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:38.420 [2024-11-18 13:30:34.286861] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e85e72f-8360-491c-ab6b-b1556b7ece1e 00:17:38.420 [2024-11-18 13:30:34.286869] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:38.420 [2024-11-18 13:30:34.286877] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:38.420 [2024-11-18 13:30:34.286885] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:38.420 [2024-11-18 13:30:34.286893] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:38.420 [2024-11-18 13:30:34.286900] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:38.420 [2024-11-18 13:30:34.286908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:38.420 [2024-11-18 13:30:34.286916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:38.420 [2024-11-18 13:30:34.286922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:38.420 [2024-11-18 13:30:34.286929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:38.420 [2024-11-18 13:30:34.286936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.420 [2024-11-18 13:30:34.286947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:38.420 [2024-11-18 13:30:34.286956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:17:38.420 [2024-11-18 13:30:34.286964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.289201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.420 [2024-11-18 13:30:34.289230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:38.420 [2024-11-18 13:30:34.289239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:17:38.420 [2024-11-18 13:30:34.289247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.289378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.420 [2024-11-18 13:30:34.289388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:38.420 [2024-11-18 13:30:34.289403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:38.420 [2024-11-18 13:30:34.289411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.297348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.297396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.420 [2024-11-18 13:30:34.297406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 
[2024-11-18 13:30:34.297413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.297487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.297498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.420 [2024-11-18 13:30:34.297505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.297513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.297564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.297573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.420 [2024-11-18 13:30:34.297586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.297593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.297612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.297621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.420 [2024-11-18 13:30:34.297629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.297636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.310845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.310895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.420 [2024-11-18 13:30:34.310906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.310915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.320819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.320869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.420 [2024-11-18 13:30:34.320880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.320888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.320957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.320967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:38.420 [2024-11-18 13:30:34.320976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.320984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.321030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.321043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:38.420 [2024-11-18 13:30:34.321052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.321060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.321129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.321138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:38.420 [2024-11-18 13:30:34.321147] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.321154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.321218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.321240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:38.420 [2024-11-18 13:30:34.321256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.321264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.321305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.321314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:38.420 [2024-11-18 13:30:34.321322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.321330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.321381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.420 [2024-11-18 13:30:34.321393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:38.420 [2024-11-18 13:30:34.321405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.420 [2024-11-18 13:30:34.321412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.420 [2024-11-18 13:30:34.321555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.879 ms, result 0 00:17:38.420 00:17:38.420 00:17:38.420 13:30:34 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:38.420 13:30:34 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:38.992 13:30:35 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:39.253 [2024-11-18 13:30:35.167060] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
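(The three ftl.ftl_trim commands just above give the shape of this phase of the test: cmp checks that the first 4 MiB of the readback file matches /dev/zero, presumably confirming that the trimmed range reads back as zeros, md5sum records a checksum of that file, and spdk_dd then writes the random_pattern file into the ftl0 bdev using the generated ftl.json config. A rough standalone sketch of the same sequence, with paths and flags copied from the log rather than from trim.sh itself:

  SPDK=/home/vagrant/spdk_repo/spdk

  # The readback of the trimmed range is expected to compare equal to zeros (first 4 MiB here).
  cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero

  # Record a checksum of the readback file.
  md5sum "$SPDK/test/ftl/data"

  # Rewrite the random pattern onto the ftl0 bdev via spdk_dd, using the ftl.json
  # config generated earlier in the test to bring up the bdevs.
  "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
      --count=1024 --json="$SPDK/test/ftl/config/ftl.json"

The spdk_dd output that follows is a full FTL startup/shutdown cycle of its own, which is why the same trace_step sequence repeats below.)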
00:17:39.253 [2024-11-18 13:30:35.167253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85595 ] 00:17:39.253 [2024-11-18 13:30:35.326592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.253 [2024-11-18 13:30:35.355552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.513 [2024-11-18 13:30:35.464555] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.513 [2024-11-18 13:30:35.464631] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.513 [2024-11-18 13:30:35.625687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.513 [2024-11-18 13:30:35.625922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:39.513 [2024-11-18 13:30:35.625947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:39.513 [2024-11-18 13:30:35.625956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.513 [2024-11-18 13:30:35.628569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.513 [2024-11-18 13:30:35.628626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.513 [2024-11-18 13:30:35.628637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:17:39.513 [2024-11-18 13:30:35.628645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.513 [2024-11-18 13:30:35.628879] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:39.513 [2024-11-18 13:30:35.629217] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:39.513 [2024-11-18 13:30:35.629246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.513 [2024-11-18 13:30:35.629258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.513 [2024-11-18 13:30:35.629268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:17:39.513 [2024-11-18 13:30:35.629277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.513 [2024-11-18 13:30:35.631530] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:39.513 [2024-11-18 13:30:35.635598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.513 [2024-11-18 13:30:35.635654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:39.513 [2024-11-18 13:30:35.635678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:17:39.513 [2024-11-18 13:30:35.635686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.513 [2024-11-18 13:30:35.635777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.513 [2024-11-18 13:30:35.635792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:39.513 [2024-11-18 13:30:35.635806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:39.513 [2024-11-18 13:30:35.635814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 13:30:35.644306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:39.776 [2024-11-18 13:30:35.644350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.776 [2024-11-18 13:30:35.644366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.447 ms 00:17:39.776 [2024-11-18 13:30:35.644374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 13:30:35.644522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 13:30:35.644535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.776 [2024-11-18 13:30:35.644545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:39.776 [2024-11-18 13:30:35.644553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 13:30:35.644586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 13:30:35.644595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:39.776 [2024-11-18 13:30:35.644608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:39.776 [2024-11-18 13:30:35.644621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 13:30:35.644643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:39.776 [2024-11-18 13:30:35.646689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 13:30:35.646725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.776 [2024-11-18 13:30:35.646736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.053 ms 00:17:39.776 [2024-11-18 13:30:35.646744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 13:30:35.646795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 13:30:35.646804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:39.776 [2024-11-18 13:30:35.646813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:39.776 [2024-11-18 13:30:35.646821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 13:30:35.646840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:39.776 [2024-11-18 13:30:35.646860] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:39.776 [2024-11-18 13:30:35.646896] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:39.776 [2024-11-18 13:30:35.646915] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:39.777 [2024-11-18 13:30:35.647021] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:39.777 [2024-11-18 13:30:35.647032] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:39.777 [2024-11-18 13:30:35.647043] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:39.777 [2024-11-18 13:30:35.647054] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647064] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647072] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:39.777 [2024-11-18 13:30:35.647080] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:39.777 [2024-11-18 13:30:35.647088] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:39.777 [2024-11-18 13:30:35.647098] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:39.777 [2024-11-18 13:30:35.647108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.777 [2024-11-18 13:30:35.647116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:39.777 [2024-11-18 13:30:35.647124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:17:39.777 [2024-11-18 13:30:35.647132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.777 [2024-11-18 13:30:35.647255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.777 [2024-11-18 13:30:35.647270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:39.777 [2024-11-18 13:30:35.647278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:39.777 [2024-11-18 13:30:35.647284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.777 [2024-11-18 13:30:35.647399] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:39.777 [2024-11-18 13:30:35.647412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:39.777 [2024-11-18 13:30:35.647425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:39.777 [2024-11-18 13:30:35.647457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:39.777 [2024-11-18 13:30:35.647483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:39.777 [2024-11-18 13:30:35.647498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:39.777 [2024-11-18 13:30:35.647506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:39.777 [2024-11-18 13:30:35.647514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:39.777 [2024-11-18 13:30:35.647522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:39.777 [2024-11-18 13:30:35.647530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:39.777 [2024-11-18 13:30:35.647537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:39.777 [2024-11-18 13:30:35.647554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647563] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:39.777 [2024-11-18 13:30:35.647578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:39.777 [2024-11-18 13:30:35.647607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:39.777 [2024-11-18 13:30:35.647630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:39.777 [2024-11-18 13:30:35.647653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:39.777 [2024-11-18 13:30:35.647676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:39.777 [2024-11-18 13:30:35.647692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:39.777 [2024-11-18 13:30:35.647700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:39.777 [2024-11-18 13:30:35.647708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:39.777 [2024-11-18 13:30:35.647716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:39.777 [2024-11-18 13:30:35.647724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:39.777 [2024-11-18 13:30:35.647734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:39.777 [2024-11-18 13:30:35.647749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:39.777 [2024-11-18 13:30:35.647756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647764] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:39.777 [2024-11-18 13:30:35.647772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:39.777 [2024-11-18 13:30:35.647781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.777 [2024-11-18 13:30:35.647795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:39.777 [2024-11-18 13:30:35.647803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:39.777 [2024-11-18 13:30:35.647810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:39.777 
[2024-11-18 13:30:35.647818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:39.777 [2024-11-18 13:30:35.647824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:39.777 [2024-11-18 13:30:35.647831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:39.777 [2024-11-18 13:30:35.647840] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:39.777 [2024-11-18 13:30:35.647849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:39.777 [2024-11-18 13:30:35.647860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:39.777 [2024-11-18 13:30:35.647868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:39.777 [2024-11-18 13:30:35.647875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:39.777 [2024-11-18 13:30:35.647882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:39.777 [2024-11-18 13:30:35.647889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:39.777 [2024-11-18 13:30:35.647898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:39.777 [2024-11-18 13:30:35.647905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:39.777 [2024-11-18 13:30:35.647914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:39.777 [2024-11-18 13:30:35.647921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:39.777 [2024-11-18 13:30:35.647929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:39.777 [2024-11-18 13:30:35.647936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:39.777 [2024-11-18 13:30:35.647943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:39.777 [2024-11-18 13:30:35.647950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:39.777 [2024-11-18 13:30:35.647957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:39.778 [2024-11-18 13:30:35.647964] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:39.778 [2024-11-18 13:30:35.647972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:39.778 [2024-11-18 13:30:35.647987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:39.778 [2024-11-18 13:30:35.647995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:39.778 [2024-11-18 13:30:35.648001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:39.778 [2024-11-18 13:30:35.648008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:39.778 [2024-11-18 13:30:35.648015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.648022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:39.778 [2024-11-18 13:30:35.648030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:17:39.778 [2024-11-18 13:30:35.648037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.661326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.661367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.778 [2024-11-18 13:30:35.661378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.238 ms 00:17:39.778 [2024-11-18 13:30:35.661386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.661513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.661531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:39.778 [2024-11-18 13:30:35.661545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:39.778 [2024-11-18 13:30:35.661552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.685041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.685378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.778 [2024-11-18 13:30:35.685420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.462 ms 00:17:39.778 [2024-11-18 13:30:35.685439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.685610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.685641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.778 [2024-11-18 13:30:35.685661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:39.778 [2024-11-18 13:30:35.685688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.686286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.686343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.778 [2024-11-18 13:30:35.686367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:39.778 [2024-11-18 13:30:35.686383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.686666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.686693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.778 [2024-11-18 13:30:35.686715] 
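(A quick consistency check on the layout dump above: the l2p region is listed at 90.00 MiB, which lines up with the reported 23592960 L2P entries at an L2P address size of 4 bytes, since 23592960 * 4 = 94371840 bytes, i.e. exactly 90 MiB:

  # Values taken from the ftl_layout_setup lines above.
  echo "$(( 23592960 * 4 / 1024 / 1024 )) MiB"   # prints: 90 MiB
)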
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:39.778 [2024-11-18 13:30:35.686732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.695904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.695951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.778 [2024-11-18 13:30:35.695973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.129 ms 00:17:39.778 [2024-11-18 13:30:35.695981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.699781] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:39.778 [2024-11-18 13:30:35.699958] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:39.778 [2024-11-18 13:30:35.699982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.699991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:39.778 [2024-11-18 13:30:35.700000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms 00:17:39.778 [2024-11-18 13:30:35.700007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.715869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.715915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:39.778 [2024-11-18 13:30:35.715938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.791 ms 00:17:39.778 [2024-11-18 13:30:35.715947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.718874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.719038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:39.778 [2024-11-18 13:30:35.719056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:17:39.778 [2024-11-18 13:30:35.719065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.721715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.721762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:39.778 [2024-11-18 13:30:35.721772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:17:39.778 [2024-11-18 13:30:35.721780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.722130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.722142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:39.778 [2024-11-18 13:30:35.722151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:17:39.778 [2024-11-18 13:30:35.722159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.744311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.744525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:39.778 [2024-11-18 13:30:35.744602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
21.974 ms 00:17:39.778 [2024-11-18 13:30:35.744627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.752807] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:39.778 [2024-11-18 13:30:35.771732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.771924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:39.778 [2024-11-18 13:30:35.771944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.999 ms 00:17:39.778 [2024-11-18 13:30:35.771963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.772057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.772072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:39.778 [2024-11-18 13:30:35.772083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:39.778 [2024-11-18 13:30:35.772095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.772152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.772196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:39.778 [2024-11-18 13:30:35.772206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:39.778 [2024-11-18 13:30:35.772214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.772240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.772250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:39.778 [2024-11-18 13:30:35.772259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:39.778 [2024-11-18 13:30:35.772267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.772302] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:39.778 [2024-11-18 13:30:35.772313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.772320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:39.778 [2024-11-18 13:30:35.772329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:39.778 [2024-11-18 13:30:35.772337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.777324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.777471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:39.778 [2024-11-18 13:30:35.777489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.963 ms 00:17:39.778 [2024-11-18 13:30:35.777498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 13:30:35.777588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.778 [2024-11-18 13:30:35.777598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:39.778 [2024-11-18 13:30:35.777614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:39.778 [2024-11-18 13:30:35.777625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 
[2024-11-18 13:30:35.778557] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:39.778 [2024-11-18 13:30:35.779843] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.574 ms, result 0 00:17:39.778 [2024-11-18 13:30:35.780776] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:39.778 [2024-11-18 13:30:35.788525] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.353  [2024-11-18T13:30:36.481Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-18 13:30:36.179718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:40.353 [2024-11-18 13:30:36.181127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.181197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:40.353 [2024-11-18 13:30:36.181210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:40.353 [2024-11-18 13:30:36.181219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.353 [2024-11-18 13:30:36.181241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:40.353 [2024-11-18 13:30:36.181906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.181939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:40.353 [2024-11-18 13:30:36.181952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:17:40.353 [2024-11-18 13:30:36.181961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.353 [2024-11-18 13:30:36.184030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.184085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:40.353 [2024-11-18 13:30:36.184107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:17:40.353 [2024-11-18 13:30:36.184120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.353 [2024-11-18 13:30:36.188582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.188618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:40.353 [2024-11-18 13:30:36.188630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.445 ms 00:17:40.353 [2024-11-18 13:30:36.188637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.353 [2024-11-18 13:30:36.195630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.195669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:40.353 [2024-11-18 13:30:36.195680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.959 ms 00:17:40.353 [2024-11-18 13:30:36.195703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.353 [2024-11-18 13:30:36.198725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.198924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:40.353 [2024-11-18 13:30:36.198943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.958 ms 00:17:40.353 [2024-11-18 13:30:36.198950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.353 [2024-11-18 13:30:36.203625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.203694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:40.353 [2024-11-18 13:30:36.203706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.578 ms 00:17:40.353 [2024-11-18 13:30:36.203720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.353 [2024-11-18 13:30:36.203834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.203843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:40.353 [2024-11-18 13:30:36.203851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:40.353 [2024-11-18 13:30:36.203862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.353 [2024-11-18 13:30:36.207201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.207246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:40.353 [2024-11-18 13:30:36.207255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.319 ms 00:17:40.353 [2024-11-18 13:30:36.207263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.353 [2024-11-18 13:30:36.210515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.353 [2024-11-18 13:30:36.210567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:40.353 [2024-11-18 13:30:36.210577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:17:40.354 [2024-11-18 13:30:36.210583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.354 [2024-11-18 13:30:36.213000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.354 [2024-11-18 13:30:36.213052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:40.354 [2024-11-18 13:30:36.213062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.368 ms 00:17:40.354 [2024-11-18 13:30:36.213069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.354 [2024-11-18 13:30:36.215432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.354 [2024-11-18 13:30:36.215482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:40.354 [2024-11-18 13:30:36.215492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:17:40.354 [2024-11-18 13:30:36.215499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.354 [2024-11-18 13:30:36.215545] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:40.354 [2024-11-18 13:30:36.215560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 
13:30:36.215593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:40.354 [2024-11-18 13:30:36.215783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.215996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:40.354 [2024-11-18 13:30:36.216195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:40.355 [2024-11-18 13:30:36.216366] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:40.355 [2024-11-18 13:30:36.216375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e85e72f-8360-491c-ab6b-b1556b7ece1e 00:17:40.355 [2024-11-18 13:30:36.216384] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:40.355 [2024-11-18 13:30:36.216391] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:40.355 
[2024-11-18 13:30:36.216399] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:40.355 [2024-11-18 13:30:36.216407] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:40.355 [2024-11-18 13:30:36.216414] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:40.355 [2024-11-18 13:30:36.216423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:40.355 [2024-11-18 13:30:36.216436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:40.355 [2024-11-18 13:30:36.216443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:40.355 [2024-11-18 13:30:36.216450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:40.355 [2024-11-18 13:30:36.216458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.355 [2024-11-18 13:30:36.216465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:40.355 [2024-11-18 13:30:36.216482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:17:40.355 [2024-11-18 13:30:36.216490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.218824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.355 [2024-11-18 13:30:36.218857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:40.355 [2024-11-18 13:30:36.218869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.299 ms 00:17:40.355 [2024-11-18 13:30:36.218876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.219029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.355 [2024-11-18 13:30:36.219039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:40.355 [2024-11-18 13:30:36.219048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:40.355 [2024-11-18 13:30:36.219056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.227501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.227695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.355 [2024-11-18 13:30:36.227715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.227730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.227813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.227827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.355 [2024-11-18 13:30:36.227835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.227842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.227891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.227901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.355 [2024-11-18 13:30:36.227908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.227915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.227939] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.227951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.355 [2024-11-18 13:30:36.227959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.227966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.241864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.241914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.355 [2024-11-18 13:30:36.241924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.241932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.252104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.252152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.355 [2024-11-18 13:30:36.252162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.252193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.252237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.252247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.355 [2024-11-18 13:30:36.252255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.252264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.252295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.252309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.355 [2024-11-18 13:30:36.252317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.252326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.252395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.252405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.355 [2024-11-18 13:30:36.252414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.252421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.252450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.252460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:40.355 [2024-11-18 13:30:36.252472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.252480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.252520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.252530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.355 [2024-11-18 13:30:36.252538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.252546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:40.355 [2024-11-18 13:30:36.252590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.355 [2024-11-18 13:30:36.252604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.355 [2024-11-18 13:30:36.252611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.355 [2024-11-18 13:30:36.252619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.355 [2024-11-18 13:30:36.252763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.611 ms, result 0 00:17:40.355 00:17:40.355 00:17:40.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.355 13:30:36 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85614 00:17:40.355 13:30:36 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85614 00:17:40.355 13:30:36 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:40.355 13:30:36 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85614 ']' 00:17:40.355 13:30:36 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.355 13:30:36 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.355 13:30:36 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.355 13:30:36 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.355 13:30:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:40.617 [2024-11-18 13:30:36.549491] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:17:40.617 [2024-11-18 13:30:36.549642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85614 ] 00:17:40.617 [2024-11-18 13:30:36.709295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.617 [2024-11-18 13:30:36.738857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.559 13:30:37 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.559 13:30:37 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:41.559 13:30:37 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:41.559 [2024-11-18 13:30:37.590285] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.559 [2024-11-18 13:30:37.590362] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.821 [2024-11-18 13:30:37.759918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.759985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.821 [2024-11-18 13:30:37.760000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.821 [2024-11-18 13:30:37.760011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.762605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.762659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.821 [2024-11-18 13:30:37.762670] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:17:41.821 [2024-11-18 13:30:37.762680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.762803] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.821 [2024-11-18 13:30:37.763079] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.821 [2024-11-18 13:30:37.763095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.763106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.821 [2024-11-18 13:30:37.763116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:41.821 [2024-11-18 13:30:37.763126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.764945] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:41.821 [2024-11-18 13:30:37.768743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.768795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:41.821 [2024-11-18 13:30:37.768814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.794 ms 00:17:41.821 [2024-11-18 13:30:37.768822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.768905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.768916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:41.821 [2024-11-18 13:30:37.768929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:41.821 [2024-11-18 13:30:37.768937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.777077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.777122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.821 [2024-11-18 13:30:37.777136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.081 ms 00:17:41.821 [2024-11-18 13:30:37.777144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.777286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.777300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.821 [2024-11-18 13:30:37.777312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:41.821 [2024-11-18 13:30:37.777322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.777349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.777361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.821 [2024-11-18 13:30:37.777374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:41.821 [2024-11-18 13:30:37.777382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.777407] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:41.821 [2024-11-18 13:30:37.779487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:41.821 [2024-11-18 13:30:37.779681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.821 [2024-11-18 13:30:37.779699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:17:41.821 [2024-11-18 13:30:37.779711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.779755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.779766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.821 [2024-11-18 13:30:37.779774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:41.821 [2024-11-18 13:30:37.779789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.779813] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:41.821 [2024-11-18 13:30:37.779835] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:41.821 [2024-11-18 13:30:37.779880] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:41.821 [2024-11-18 13:30:37.779907] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:41.821 [2024-11-18 13:30:37.780014] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.821 [2024-11-18 13:30:37.780028] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.821 [2024-11-18 13:30:37.780038] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:41.821 [2024-11-18 13:30:37.780050] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.821 [2024-11-18 13:30:37.780059] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.821 [2024-11-18 13:30:37.780071] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:41.821 [2024-11-18 13:30:37.780079] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.821 [2024-11-18 13:30:37.780088] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.821 [2024-11-18 13:30:37.780098] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:41.821 [2024-11-18 13:30:37.780111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.780119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.821 [2024-11-18 13:30:37.780130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:17:41.821 [2024-11-18 13:30:37.780137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.780250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.821 [2024-11-18 13:30:37.780262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.821 [2024-11-18 13:30:37.780273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:41.821 [2024-11-18 13:30:37.780281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.821 [2024-11-18 13:30:37.780389] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.821 [2024-11-18 13:30:37.780401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.821 [2024-11-18 13:30:37.780414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.821 [2024-11-18 13:30:37.780423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.821 [2024-11-18 13:30:37.780438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.821 [2024-11-18 13:30:37.780446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.821 [2024-11-18 13:30:37.780457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:41.821 [2024-11-18 13:30:37.780470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.821 [2024-11-18 13:30:37.780481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:41.821 [2024-11-18 13:30:37.780489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.821 [2024-11-18 13:30:37.780499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.821 [2024-11-18 13:30:37.780506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:41.821 [2024-11-18 13:30:37.780516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.821 [2024-11-18 13:30:37.780524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.821 [2024-11-18 13:30:37.780534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:41.821 [2024-11-18 13:30:37.780541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.821 [2024-11-18 13:30:37.780550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.821 [2024-11-18 13:30:37.780558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:41.821 [2024-11-18 13:30:37.780568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.821 [2024-11-18 13:30:37.780578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.821 [2024-11-18 13:30:37.780589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:41.821 [2024-11-18 13:30:37.780598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.821 [2024-11-18 13:30:37.780608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.821 [2024-11-18 13:30:37.780616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:41.821 [2024-11-18 13:30:37.780625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.821 [2024-11-18 13:30:37.780633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.821 [2024-11-18 13:30:37.780643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:41.821 [2024-11-18 13:30:37.780651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.821 [2024-11-18 13:30:37.780662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.821 [2024-11-18 13:30:37.780670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:41.821 [2024-11-18 13:30:37.780681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.821 [2024-11-18 13:30:37.780688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.822 [2024-11-18 
13:30:37.780696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:41.822 [2024-11-18 13:30:37.780703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.822 [2024-11-18 13:30:37.780711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.822 [2024-11-18 13:30:37.780718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:41.822 [2024-11-18 13:30:37.780728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.822 [2024-11-18 13:30:37.780735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.822 [2024-11-18 13:30:37.780744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:41.822 [2024-11-18 13:30:37.780750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.822 [2024-11-18 13:30:37.780760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.822 [2024-11-18 13:30:37.780767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:41.822 [2024-11-18 13:30:37.780774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.822 [2024-11-18 13:30:37.780781] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.822 [2024-11-18 13:30:37.780790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.822 [2024-11-18 13:30:37.780799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.822 [2024-11-18 13:30:37.780808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.822 [2024-11-18 13:30:37.780816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.822 [2024-11-18 13:30:37.780824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.822 [2024-11-18 13:30:37.780832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.822 [2024-11-18 13:30:37.780840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.822 [2024-11-18 13:30:37.780848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.822 [2024-11-18 13:30:37.780859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.822 [2024-11-18 13:30:37.780867] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.822 [2024-11-18 13:30:37.780881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.822 [2024-11-18 13:30:37.780890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:41.822 [2024-11-18 13:30:37.780900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:41.822 [2024-11-18 13:30:37.780908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:41.822 [2024-11-18 13:30:37.780918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:41.822 [2024-11-18 13:30:37.780926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:41.822 
[2024-11-18 13:30:37.780935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:41.822 [2024-11-18 13:30:37.780943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:41.822 [2024-11-18 13:30:37.780952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:41.822 [2024-11-18 13:30:37.780960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:41.822 [2024-11-18 13:30:37.780971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:41.822 [2024-11-18 13:30:37.780978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:41.822 [2024-11-18 13:30:37.780988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:41.822 [2024-11-18 13:30:37.780995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:41.822 [2024-11-18 13:30:37.781010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:41.822 [2024-11-18 13:30:37.781017] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.822 [2024-11-18 13:30:37.781030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.822 [2024-11-18 13:30:37.781038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.822 [2024-11-18 13:30:37.781047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.822 [2024-11-18 13:30:37.781054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.822 [2024-11-18 13:30:37.781064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.822 [2024-11-18 13:30:37.781071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.781081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.822 [2024-11-18 13:30:37.781089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:17:41.822 [2024-11-18 13:30:37.781097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.795398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.795450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.822 [2024-11-18 13:30:37.795463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.240 ms 00:17:41.822 [2024-11-18 13:30:37.795473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.795605] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.795622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:41.822 [2024-11-18 13:30:37.795630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:41.822 [2024-11-18 13:30:37.795640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.808510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.808562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.822 [2024-11-18 13:30:37.808573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.848 ms 00:17:41.822 [2024-11-18 13:30:37.808582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.808652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.808664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.822 [2024-11-18 13:30:37.808674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.822 [2024-11-18 13:30:37.808684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.809250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.809282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.822 [2024-11-18 13:30:37.809293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:17:41.822 [2024-11-18 13:30:37.809304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.809457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.809479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.822 [2024-11-18 13:30:37.809490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:41.822 [2024-11-18 13:30:37.809501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.817983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.818038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.822 [2024-11-18 13:30:37.818048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.457 ms 00:17:41.822 [2024-11-18 13:30:37.818057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.822015] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:41.822 [2024-11-18 13:30:37.822219] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:41.822 [2024-11-18 13:30:37.822238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.822248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:41.822 [2024-11-18 13:30:37.822257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:17:41.822 [2024-11-18 13:30:37.822266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.838213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 
13:30:37.838269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:41.822 [2024-11-18 13:30:37.838283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.786 ms 00:17:41.822 [2024-11-18 13:30:37.838296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.841307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.841480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:41.822 [2024-11-18 13:30:37.841498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:17:41.822 [2024-11-18 13:30:37.841508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.844230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.844280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:41.822 [2024-11-18 13:30:37.844290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:17:41.822 [2024-11-18 13:30:37.844299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.844643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.844656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:41.822 [2024-11-18 13:30:37.844665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:41.822 [2024-11-18 13:30:37.844675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.822 [2024-11-18 13:30:37.879293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.822 [2024-11-18 13:30:37.879527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.822 [2024-11-18 13:30:37.879551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.595 ms 00:17:41.822 [2024-11-18 13:30:37.879566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.823 [2024-11-18 13:30:37.887828] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:41.823 [2024-11-18 13:30:37.907368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.823 [2024-11-18 13:30:37.907417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.823 [2024-11-18 13:30:37.907435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.711 ms 00:17:41.823 [2024-11-18 13:30:37.907444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.823 [2024-11-18 13:30:37.907545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.823 [2024-11-18 13:30:37.907557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.823 [2024-11-18 13:30:37.907572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:41.823 [2024-11-18 13:30:37.907580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.823 [2024-11-18 13:30:37.907639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.823 [2024-11-18 13:30:37.907658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.823 [2024-11-18 13:30:37.907669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:41.823 [2024-11-18 
13:30:37.907677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.823 [2024-11-18 13:30:37.907710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.823 [2024-11-18 13:30:37.907719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:41.823 [2024-11-18 13:30:37.907733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:41.823 [2024-11-18 13:30:37.907743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.823 [2024-11-18 13:30:37.907787] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.823 [2024-11-18 13:30:37.907798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.823 [2024-11-18 13:30:37.907808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.823 [2024-11-18 13:30:37.907819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:41.823 [2024-11-18 13:30:37.907828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.823 [2024-11-18 13:30:37.914076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.823 [2024-11-18 13:30:37.914141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.823 [2024-11-18 13:30:37.914152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 00:17:41.823 [2024-11-18 13:30:37.914186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.823 [2024-11-18 13:30:37.914282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.823 [2024-11-18 13:30:37.914299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.823 [2024-11-18 13:30:37.914309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:41.823 [2024-11-18 13:30:37.914319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.823 [2024-11-18 13:30:37.915359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.823 [2024-11-18 13:30:37.916783] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.104 ms, result 0 00:17:41.823 [2024-11-18 13:30:37.919020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.823 Some configs were skipped because the RPC state that can call them passed over. 
00:17:42.084 13:30:37 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:42.084 [2024-11-18 13:30:38.156404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.084 [2024-11-18 13:30:38.156468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:42.084 [2024-11-18 13:30:38.156491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.984 ms 00:17:42.084 [2024-11-18 13:30:38.156501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.084 [2024-11-18 13:30:38.156541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.138 ms, result 0 00:17:42.084 true 00:17:42.084 13:30:38 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:42.344 [2024-11-18 13:30:38.372551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.344 [2024-11-18 13:30:38.372615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:42.344 [2024-11-18 13:30:38.372628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:17:42.344 [2024-11-18 13:30:38.372638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.344 [2024-11-18 13:30:38.372675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.053 ms, result 0 00:17:42.344 true 00:17:42.344 13:30:38 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85614 00:17:42.344 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85614 ']' 00:17:42.344 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85614 00:17:42.344 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:42.344 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.344 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85614 00:17:42.344 killing process with pid 85614 00:17:42.345 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.345 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.345 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85614' 00:17:42.345 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85614 00:17:42.345 13:30:38 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85614 00:17:42.606 [2024-11-18 13:30:38.555574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.555631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:42.606 [2024-11-18 13:30:38.555647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:42.606 [2024-11-18 13:30:38.555656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.555683] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:42.606 [2024-11-18 13:30:38.556306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.556335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:42.606 [2024-11-18 13:30:38.556346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.604 ms 00:17:42.606 [2024-11-18 13:30:38.556357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.556662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.556676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:42.606 [2024-11-18 13:30:38.556685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:42.606 [2024-11-18 13:30:38.556696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.561300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.561340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:42.606 [2024-11-18 13:30:38.561351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:17:42.606 [2024-11-18 13:30:38.561364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.568403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.568458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:42.606 [2024-11-18 13:30:38.568469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.997 ms 00:17:42.606 [2024-11-18 13:30:38.568486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.571052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.571283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:42.606 [2024-11-18 13:30:38.571301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.489 ms 00:17:42.606 [2024-11-18 13:30:38.571310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.574920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.575072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:42.606 [2024-11-18 13:30:38.575091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.481 ms 00:17:42.606 [2024-11-18 13:30:38.575104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.575271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.575285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:42.606 [2024-11-18 13:30:38.575294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:17:42.606 [2024-11-18 13:30:38.575304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.577644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.577690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:42.606 [2024-11-18 13:30:38.577700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.321 ms 00:17:42.606 [2024-11-18 13:30:38.577714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.579681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.579731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:42.606 [2024-11-18 
13:30:38.579741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.923 ms 00:17:42.606 [2024-11-18 13:30:38.579751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.581039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.581085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:42.606 [2024-11-18 13:30:38.581095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:17:42.606 [2024-11-18 13:30:38.581103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.582444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.606 [2024-11-18 13:30:38.582493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:42.606 [2024-11-18 13:30:38.582503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:17:42.606 [2024-11-18 13:30:38.582512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.606 [2024-11-18 13:30:38.582550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:42.606 [2024-11-18 13:30:38.582568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582711] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 
13:30:38.582928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.582994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:42.606 [2024-11-18 13:30:38.583141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:42.606 [2024-11-18 13:30:38.583219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:42.607 [2024-11-18 13:30:38.583482] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:42.607 [2024-11-18 13:30:38.583491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e85e72f-8360-491c-ab6b-b1556b7ece1e 00:17:42.607 [2024-11-18 13:30:38.583502] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:42.607 [2024-11-18 13:30:38.583518] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:42.607 [2024-11-18 13:30:38.583528] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:42.607 [2024-11-18 13:30:38.583535] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:42.607 [2024-11-18 13:30:38.583545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:42.607 [2024-11-18 13:30:38.583561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:42.607 [2024-11-18 13:30:38.583571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:42.607 [2024-11-18 13:30:38.583578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:42.607 [2024-11-18 13:30:38.583587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:42.607 [2024-11-18 13:30:38.583594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.607 [2024-11-18 13:30:38.583604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:42.607 [2024-11-18 13:30:38.583613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:17:42.607 [2024-11-18 13:30:38.583625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.585458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.607 [2024-11-18 13:30:38.585487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:42.607 [2024-11-18 13:30:38.585498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:17:42.607 [2024-11-18 13:30:38.585509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.585625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:42.607 [2024-11-18 13:30:38.585637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:42.607 [2024-11-18 13:30:38.585648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:42.607 [2024-11-18 13:30:38.585659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.592582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.592738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.607 [2024-11-18 13:30:38.592755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.592765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.592855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.592867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.607 [2024-11-18 13:30:38.592876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.592888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.592933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.592945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.607 [2024-11-18 13:30:38.592953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.592968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.592986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.592996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.607 [2024-11-18 13:30:38.593004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.593014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.605509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.605558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.607 [2024-11-18 13:30:38.605573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.605583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.614954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.615007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.607 [2024-11-18 13:30:38.615018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.615031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.615079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.615093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.607 [2024-11-18 13:30:38.615101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.615112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:42.607 [2024-11-18 13:30:38.615384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.615424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.607 [2024-11-18 13:30:38.615447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.615469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.615572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.615599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.607 [2024-11-18 13:30:38.615622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.615711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.615789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.615804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:42.607 [2024-11-18 13:30:38.615813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.615824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.615867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.615878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.607 [2024-11-18 13:30:38.615889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.615898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.615943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.607 [2024-11-18 13:30:38.615955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.607 [2024-11-18 13:30:38.615964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.607 [2024-11-18 13:30:38.615973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.607 [2024-11-18 13:30:38.616116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.520 ms, result 0 00:17:42.867 13:30:38 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:42.867 [2024-11-18 13:30:38.893747] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:17:42.867 [2024-11-18 13:30:38.893904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85650 ] 00:17:43.128 [2024-11-18 13:30:39.057351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.128 [2024-11-18 13:30:39.086582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.128 [2024-11-18 13:30:39.203840] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.128 [2024-11-18 13:30:39.203923] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.391 [2024-11-18 13:30:39.365113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.365202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:43.391 [2024-11-18 13:30:39.365218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:43.391 [2024-11-18 13:30:39.365234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.367783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.367836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.391 [2024-11-18 13:30:39.367848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:17:43.391 [2024-11-18 13:30:39.367859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.367962] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:43.391 [2024-11-18 13:30:39.368241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:43.391 [2024-11-18 13:30:39.368257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.368268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.391 [2024-11-18 13:30:39.368278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:17:43.391 [2024-11-18 13:30:39.368286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.369997] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:43.391 [2024-11-18 13:30:39.373872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.373931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:43.391 [2024-11-18 13:30:39.373948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.876 ms 00:17:43.391 [2024-11-18 13:30:39.373956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.374052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.374063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:43.391 [2024-11-18 13:30:39.374074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:43.391 [2024-11-18 13:30:39.374082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.382228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:43.391 [2024-11-18 13:30:39.382270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.391 [2024-11-18 13:30:39.382289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.102 ms 00:17:43.391 [2024-11-18 13:30:39.382298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.382437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.382449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.391 [2024-11-18 13:30:39.382459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:43.391 [2024-11-18 13:30:39.382470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.382498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.382507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:43.391 [2024-11-18 13:30:39.382515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:43.391 [2024-11-18 13:30:39.382523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.382544] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:43.391 [2024-11-18 13:30:39.384634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.384819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.391 [2024-11-18 13:30:39.384847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:17:43.391 [2024-11-18 13:30:39.384856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.384909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.384918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:43.391 [2024-11-18 13:30:39.384926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:43.391 [2024-11-18 13:30:39.384938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.384957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:43.391 [2024-11-18 13:30:39.384978] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:43.391 [2024-11-18 13:30:39.385014] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:43.391 [2024-11-18 13:30:39.385036] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:43.391 [2024-11-18 13:30:39.385142] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:43.391 [2024-11-18 13:30:39.385153] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:43.391 [2024-11-18 13:30:39.385186] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:43.391 [2024-11-18 13:30:39.385197] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:43.391 [2024-11-18 13:30:39.385207] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:43.391 [2024-11-18 13:30:39.385223] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:43.391 [2024-11-18 13:30:39.385231] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:43.391 [2024-11-18 13:30:39.385238] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:43.391 [2024-11-18 13:30:39.385248] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:43.391 [2024-11-18 13:30:39.385259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.385267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:43.391 [2024-11-18 13:30:39.385275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:43.391 [2024-11-18 13:30:39.385282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.385371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.391 [2024-11-18 13:30:39.385381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:43.391 [2024-11-18 13:30:39.385390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:43.391 [2024-11-18 13:30:39.385402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.391 [2024-11-18 13:30:39.385509] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:43.391 [2024-11-18 13:30:39.385520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:43.391 [2024-11-18 13:30:39.385540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.391 [2024-11-18 13:30:39.385558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.391 [2024-11-18 13:30:39.385568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:43.391 [2024-11-18 13:30:39.385577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:43.391 [2024-11-18 13:30:39.385585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:43.391 [2024-11-18 13:30:39.385595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:43.392 [2024-11-18 13:30:39.385603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.392 [2024-11-18 13:30:39.385619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:43.392 [2024-11-18 13:30:39.385626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:43.392 [2024-11-18 13:30:39.385634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.392 [2024-11-18 13:30:39.385642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:43.392 [2024-11-18 13:30:39.385651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:43.392 [2024-11-18 13:30:39.385658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:43.392 [2024-11-18 13:30:39.385676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:43.392 [2024-11-18 13:30:39.385684] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:43.392 [2024-11-18 13:30:39.385701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.392 [2024-11-18 13:30:39.385716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:43.392 [2024-11-18 13:30:39.385728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.392 [2024-11-18 13:30:39.385742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:43.392 [2024-11-18 13:30:39.385749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.392 [2024-11-18 13:30:39.385762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:43.392 [2024-11-18 13:30:39.385769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.392 [2024-11-18 13:30:39.385782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:43.392 [2024-11-18 13:30:39.385788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.392 [2024-11-18 13:30:39.385801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:43.392 [2024-11-18 13:30:39.385808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:43.392 [2024-11-18 13:30:39.385814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.392 [2024-11-18 13:30:39.385821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:43.392 [2024-11-18 13:30:39.385828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:43.392 [2024-11-18 13:30:39.385837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:43.392 [2024-11-18 13:30:39.385851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:43.392 [2024-11-18 13:30:39.385857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385863] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:43.392 [2024-11-18 13:30:39.385872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:43.392 [2024-11-18 13:30:39.385879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.392 [2024-11-18 13:30:39.385891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.392 [2024-11-18 13:30:39.385899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:43.392 [2024-11-18 13:30:39.385906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:43.392 [2024-11-18 13:30:39.385916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:43.392 
[2024-11-18 13:30:39.385924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:43.392 [2024-11-18 13:30:39.385930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:43.392 [2024-11-18 13:30:39.385937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:43.392 [2024-11-18 13:30:39.385946] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:43.392 [2024-11-18 13:30:39.385956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.392 [2024-11-18 13:30:39.385966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:43.392 [2024-11-18 13:30:39.385973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:43.392 [2024-11-18 13:30:39.385981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:43.392 [2024-11-18 13:30:39.385989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:43.392 [2024-11-18 13:30:39.385996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:43.392 [2024-11-18 13:30:39.386002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:43.392 [2024-11-18 13:30:39.386010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:43.392 [2024-11-18 13:30:39.386016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:43.392 [2024-11-18 13:30:39.386023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:43.392 [2024-11-18 13:30:39.386030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:43.392 [2024-11-18 13:30:39.386037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:43.392 [2024-11-18 13:30:39.386044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:43.392 [2024-11-18 13:30:39.386050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:43.392 [2024-11-18 13:30:39.386057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:43.392 [2024-11-18 13:30:39.386065] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:43.392 [2024-11-18 13:30:39.386073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.392 [2024-11-18 13:30:39.386085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:43.392 [2024-11-18 13:30:39.386093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:43.392 [2024-11-18 13:30:39.386100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:43.392 [2024-11-18 13:30:39.386108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:43.392 [2024-11-18 13:30:39.386115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.392 [2024-11-18 13:30:39.386123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:43.392 [2024-11-18 13:30:39.386131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:17:43.392 [2024-11-18 13:30:39.386141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.392 [2024-11-18 13:30:39.400423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.392 [2024-11-18 13:30:39.400470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.392 [2024-11-18 13:30:39.400483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.213 ms 00:17:43.392 [2024-11-18 13:30:39.400493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.392 [2024-11-18 13:30:39.400627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.392 [2024-11-18 13:30:39.400645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:43.392 [2024-11-18 13:30:39.400654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:43.393 [2024-11-18 13:30:39.400662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.420202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.420254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.393 [2024-11-18 13:30:39.420268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.515 ms 00:17:43.393 [2024-11-18 13:30:39.420276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.420372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.420389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.393 [2024-11-18 13:30:39.420398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:43.393 [2024-11-18 13:30:39.420406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.420913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.420965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.393 [2024-11-18 13:30:39.420977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:17:43.393 [2024-11-18 13:30:39.420986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.421148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.421159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.393 [2024-11-18 13:30:39.421205] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:43.393 [2024-11-18 13:30:39.421214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.429761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.429810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.393 [2024-11-18 13:30:39.429827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.522 ms 00:17:43.393 [2024-11-18 13:30:39.429835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.433825] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:43.393 [2024-11-18 13:30:39.433878] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:43.393 [2024-11-18 13:30:39.433891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.433900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:43.393 [2024-11-18 13:30:39.433909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.952 ms 00:17:43.393 [2024-11-18 13:30:39.433917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.449941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.450140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:43.393 [2024-11-18 13:30:39.450161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.940 ms 00:17:43.393 [2024-11-18 13:30:39.450190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.453232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.453275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:43.393 [2024-11-18 13:30:39.453285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:17:43.393 [2024-11-18 13:30:39.453292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.455764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.455812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:43.393 [2024-11-18 13:30:39.455821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.406 ms 00:17:43.393 [2024-11-18 13:30:39.455828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.456208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.456226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:43.393 [2024-11-18 13:30:39.456236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:17:43.393 [2024-11-18 13:30:39.456244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.479910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.480131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:43.393 [2024-11-18 13:30:39.480162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.640 ms 00:17:43.393 [2024-11-18 13:30:39.480206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.488757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:43.393 [2024-11-18 13:30:39.508686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.508736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:43.393 [2024-11-18 13:30:39.508750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.295 ms 00:17:43.393 [2024-11-18 13:30:39.508767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.508863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.508874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:43.393 [2024-11-18 13:30:39.508888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:43.393 [2024-11-18 13:30:39.508900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.508958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.508969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:43.393 [2024-11-18 13:30:39.508977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:43.393 [2024-11-18 13:30:39.508985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.509013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.509022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:43.393 [2024-11-18 13:30:39.509031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:43.393 [2024-11-18 13:30:39.509045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.393 [2024-11-18 13:30:39.509090] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:43.393 [2024-11-18 13:30:39.509101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.393 [2024-11-18 13:30:39.509109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:43.393 [2024-11-18 13:30:39.509117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:43.393 [2024-11-18 13:30:39.509125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.655 [2024-11-18 13:30:39.515358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.655 [2024-11-18 13:30:39.515409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:43.655 [2024-11-18 13:30:39.515421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.210 ms 00:17:43.655 [2024-11-18 13:30:39.515437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.655 [2024-11-18 13:30:39.515532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.655 [2024-11-18 13:30:39.515544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:43.655 [2024-11-18 13:30:39.515553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:43.655 [2024-11-18 13:30:39.515562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.655 
[2024-11-18 13:30:39.516634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:43.655 [2024-11-18 13:30:39.518034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.192 ms, result 0 00:17:43.655 [2024-11-18 13:30:39.519127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:43.655 [2024-11-18 13:30:39.526707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.599  [2024-11-18T13:30:41.668Z] Copying: 21/256 [MB] (21 MBps) [2024-11-18T13:30:42.612Z] Copying: 39/256 [MB] (17 MBps) [2024-11-18T13:30:43.994Z] Copying: 52/256 [MB] (13 MBps) [2024-11-18T13:30:44.596Z] Copying: 86/256 [MB] (33 MBps) [2024-11-18T13:30:45.984Z] Copying: 104/256 [MB] (17 MBps) [2024-11-18T13:30:46.928Z] Copying: 118/256 [MB] (13 MBps) [2024-11-18T13:30:47.865Z] Copying: 136/256 [MB] (18 MBps) [2024-11-18T13:30:48.807Z] Copying: 162/256 [MB] (25 MBps) [2024-11-18T13:30:49.746Z] Copying: 182/256 [MB] (20 MBps) [2024-11-18T13:30:50.689Z] Copying: 214/256 [MB] (31 MBps) [2024-11-18T13:30:51.633Z] Copying: 233/256 [MB] (18 MBps) [2024-11-18T13:30:52.211Z] Copying: 249/256 [MB] (16 MBps) [2024-11-18T13:30:52.211Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-18 13:30:51.941431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:56.083 [2024-11-18 13:30:51.942658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.083 [2024-11-18 13:30:51.942803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:56.083 [2024-11-18 13:30:51.942822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.083 [2024-11-18 13:30:51.942831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.083 [2024-11-18 13:30:51.942855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:56.083 [2024-11-18 13:30:51.943313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.943337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:56.084 [2024-11-18 13:30:51.943346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:17:56.084 [2024-11-18 13:30:51.943353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.943599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.943609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:56.084 [2024-11-18 13:30:51.943621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:17:56.084 [2024-11-18 13:30:51.943633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.947553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.947576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:56.084 [2024-11-18 13:30:51.947586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.904 ms 00:17:56.084 [2024-11-18 13:30:51.947594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.954623] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.954648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:56.084 [2024-11-18 13:30:51.954658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.012 ms 00:17:56.084 [2024-11-18 13:30:51.954678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.956486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.956518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:56.084 [2024-11-18 13:30:51.956528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.757 ms 00:17:56.084 [2024-11-18 13:30:51.956535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.959803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.959835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:56.084 [2024-11-18 13:30:51.959844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:17:56.084 [2024-11-18 13:30:51.959851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.959967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.959977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:56.084 [2024-11-18 13:30:51.959990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:56.084 [2024-11-18 13:30:51.960001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.961690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.961814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:56.084 [2024-11-18 13:30:51.961829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:17:56.084 [2024-11-18 13:30:51.961836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.962995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.963021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:56.084 [2024-11-18 13:30:51.963030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:17:56.084 [2024-11-18 13:30:51.963036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.964071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.964102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:56.084 [2024-11-18 13:30:51.964111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:17:56.084 [2024-11-18 13:30:51.964118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.084 [2024-11-18 13:30:51.965120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.084 [2024-11-18 13:30:51.965240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:56.084 [2024-11-18 13:30:51.965254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:17:56.084 [2024-11-18 13:30:51.965260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:56.084 [2024-11-18 13:30:51.965288] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:56.084 [2024-11-18 13:30:51.965302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 
[2024-11-18 13:30:51.965480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:56.084 [2024-11-18 13:30:51.965654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:17:56.085 [2024-11-18 13:30:51.965661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.965998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.966005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.966019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.966026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:56.085 [2024-11-18 13:30:51.966042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:56.085 [2024-11-18 13:30:51.966050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e85e72f-8360-491c-ab6b-b1556b7ece1e 00:17:56.085 [2024-11-18 13:30:51.966057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:56.085 [2024-11-18 13:30:51.966064] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:56.085 [2024-11-18 13:30:51.966077] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:56.085 [2024-11-18 13:30:51.966085] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:56.085 [2024-11-18 13:30:51.966095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:56.085 [2024-11-18 13:30:51.966103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:56.085 [2024-11-18 13:30:51.966112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:56.085 [2024-11-18 13:30:51.966119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:56.085 [2024-11-18 13:30:51.966125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:56.085 [2024-11-18 13:30:51.966132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.085 [2024-11-18 13:30:51.966139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:56.085 [2024-11-18 13:30:51.966147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:17:56.085 [2024-11-18 13:30:51.966154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.085 [2024-11-18 13:30:51.967513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.085 [2024-11-18 13:30:51.967534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:56.085 [2024-11-18 13:30:51.967543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:17:56.085 [2024-11-18 13:30:51.967554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.085 [2024-11-18 13:30:51.967630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.085 [2024-11-18 13:30:51.967638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:56.085 [2024-11-18 13:30:51.967646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:56.085 [2024-11-18 13:30:51.967657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.085 [2024-11-18 13:30:51.972655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.085 [2024-11-18 13:30:51.972765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.085 [2024-11-18 13:30:51.972779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.085 [2024-11-18 13:30:51.972791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.085 [2024-11-18 13:30:51.972844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.085 [2024-11-18 13:30:51.972853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.085 [2024-11-18 13:30:51.972860] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.085 [2024-11-18 13:30:51.972867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.085 [2024-11-18 13:30:51.972907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.085 [2024-11-18 13:30:51.972916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.085 [2024-11-18 13:30:51.972923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.085 [2024-11-18 13:30:51.972930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.085 [2024-11-18 13:30:51.972949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.085 [2024-11-18 13:30:51.972956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.085 [2024-11-18 13:30:51.972963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.085 [2024-11-18 13:30:51.972973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.085 [2024-11-18 13:30:51.981288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.085 [2024-11-18 13:30:51.981326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.085 [2024-11-18 13:30:51.981335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.085 [2024-11-18 13:30:51.981345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.085 [2024-11-18 13:30:51.987906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.085 [2024-11-18 13:30:51.988048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.085 [2024-11-18 13:30:51.988062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.085 [2024-11-18 13:30:51.988070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.085 [2024-11-18 13:30:51.988129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.086 [2024-11-18 13:30:51.988138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.086 [2024-11-18 13:30:51.988146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.086 [2024-11-18 13:30:51.988153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.086 [2024-11-18 13:30:51.988196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.086 [2024-11-18 13:30:51.988207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.086 [2024-11-18 13:30:51.988215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.086 [2024-11-18 13:30:51.988222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.086 [2024-11-18 13:30:51.988284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.086 [2024-11-18 13:30:51.988293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.086 [2024-11-18 13:30:51.988301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.086 [2024-11-18 13:30:51.988308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.086 [2024-11-18 13:30:51.988336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.086 [2024-11-18 13:30:51.988347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:17:56.086 [2024-11-18 13:30:51.988355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.086 [2024-11-18 13:30:51.988362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.086 [2024-11-18 13:30:51.988398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.086 [2024-11-18 13:30:51.988409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.086 [2024-11-18 13:30:51.988417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.086 [2024-11-18 13:30:51.988424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.086 [2024-11-18 13:30:51.988463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.086 [2024-11-18 13:30:51.988474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.086 [2024-11-18 13:30:51.988482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.086 [2024-11-18 13:30:51.988489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.086 [2024-11-18 13:30:51.988611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.930 ms, result 0 00:17:56.086 00:17:56.086 00:17:56.086 13:30:52 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:56.657 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:56.657 13:30:52 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:56.657 13:30:52 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:56.657 13:30:52 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:56.657 13:30:52 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:56.657 13:30:52 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:56.657 13:30:52 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:56.657 Process with pid 85614 is not found 00:17:56.657 13:30:52 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85614 00:17:56.657 13:30:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85614 ']' 00:17:56.657 13:30:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85614 00:17:56.657 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85614) - No such process 00:17:56.657 13:30:52 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 85614 is not found' 00:17:56.657 ************************************ 00:17:56.657 END TEST ftl_trim 00:17:56.657 ************************************ 00:17:56.657 00:17:56.657 real 1m2.794s 00:17:56.657 user 1m25.155s 00:17:56.657 sys 0m5.240s 00:17:56.657 13:30:52 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.657 13:30:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:56.917 13:30:52 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:56.917 13:30:52 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:56.917 13:30:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.917 13:30:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:56.917 ************************************ 00:17:56.917 START TEST ftl_restore 
00:17:56.917 ************************************ 00:17:56.917 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:56.917 * Looking for test storage... 00:17:56.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.917 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:56.917 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:56.917 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:17:56.917 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.917 13:30:52 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:56.917 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.917 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:56.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.917 --rc genhtml_branch_coverage=1 00:17:56.917 --rc genhtml_function_coverage=1 00:17:56.917 --rc genhtml_legend=1 00:17:56.917 --rc geninfo_all_blocks=1 00:17:56.917 --rc geninfo_unexecuted_blocks=1 00:17:56.917 00:17:56.917 ' 00:17:56.917 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:56.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.917 --rc genhtml_branch_coverage=1 00:17:56.917 --rc genhtml_function_coverage=1 00:17:56.917 --rc genhtml_legend=1 00:17:56.918 --rc geninfo_all_blocks=1 00:17:56.918 --rc geninfo_unexecuted_blocks=1 00:17:56.918 00:17:56.918 ' 00:17:56.918 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:56.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.918 --rc genhtml_branch_coverage=1 00:17:56.918 --rc genhtml_function_coverage=1 00:17:56.918 --rc genhtml_legend=1 00:17:56.918 --rc geninfo_all_blocks=1 00:17:56.918 --rc geninfo_unexecuted_blocks=1 00:17:56.918 00:17:56.918 ' 00:17:56.918 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:56.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.918 --rc genhtml_branch_coverage=1 00:17:56.918 --rc genhtml_function_coverage=1 00:17:56.918 --rc genhtml_legend=1 00:17:56.918 --rc geninfo_all_blocks=1 00:17:56.918 --rc geninfo_unexecuted_blocks=1 00:17:56.918 00:17:56.918 ' 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.UV3qGa2iYC 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:56.918 
13:30:52 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=85861 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 85861 00:17:56.918 13:30:52 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.918 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 85861 ']' 00:17:56.918 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.918 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.918 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.918 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.918 13:30:52 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:57.180 [2024-11-18 13:30:53.048511] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:17:57.180 [2024-11-18 13:30:53.048731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85861 ] 00:17:57.180 [2024-11-18 13:30:53.207195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.180 [2024-11-18 13:30:53.226249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.117 13:30:53 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.117 13:30:53 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:17:58.117 13:30:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:58.117 13:30:53 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:58.117 13:30:53 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:58.117 13:30:53 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:58.117 13:30:53 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:58.117 13:30:53 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:58.117 13:30:54 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:58.117 13:30:54 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:58.117 13:30:54 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:58.117 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:58.117 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:58.117 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:58.117 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:58.117 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:58.375 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:58.375 { 00:17:58.375 "name": "nvme0n1", 00:17:58.375 "aliases": [ 00:17:58.375 "713dec6e-1c55-429f-bc7c-d749f654b6e7" 00:17:58.375 ], 00:17:58.375 "product_name": "NVMe disk", 00:17:58.375 "block_size": 4096, 00:17:58.375 "num_blocks": 1310720, 00:17:58.375 "uuid": 
"713dec6e-1c55-429f-bc7c-d749f654b6e7", 00:17:58.375 "numa_id": -1, 00:17:58.375 "assigned_rate_limits": { 00:17:58.375 "rw_ios_per_sec": 0, 00:17:58.375 "rw_mbytes_per_sec": 0, 00:17:58.375 "r_mbytes_per_sec": 0, 00:17:58.375 "w_mbytes_per_sec": 0 00:17:58.375 }, 00:17:58.375 "claimed": true, 00:17:58.375 "claim_type": "read_many_write_one", 00:17:58.375 "zoned": false, 00:17:58.375 "supported_io_types": { 00:17:58.375 "read": true, 00:17:58.375 "write": true, 00:17:58.375 "unmap": true, 00:17:58.375 "flush": true, 00:17:58.375 "reset": true, 00:17:58.375 "nvme_admin": true, 00:17:58.375 "nvme_io": true, 00:17:58.375 "nvme_io_md": false, 00:17:58.375 "write_zeroes": true, 00:17:58.375 "zcopy": false, 00:17:58.375 "get_zone_info": false, 00:17:58.375 "zone_management": false, 00:17:58.375 "zone_append": false, 00:17:58.375 "compare": true, 00:17:58.375 "compare_and_write": false, 00:17:58.375 "abort": true, 00:17:58.375 "seek_hole": false, 00:17:58.375 "seek_data": false, 00:17:58.375 "copy": true, 00:17:58.375 "nvme_iov_md": false 00:17:58.375 }, 00:17:58.375 "driver_specific": { 00:17:58.375 "nvme": [ 00:17:58.375 { 00:17:58.375 "pci_address": "0000:00:11.0", 00:17:58.375 "trid": { 00:17:58.375 "trtype": "PCIe", 00:17:58.375 "traddr": "0000:00:11.0" 00:17:58.375 }, 00:17:58.375 "ctrlr_data": { 00:17:58.375 "cntlid": 0, 00:17:58.375 "vendor_id": "0x1b36", 00:17:58.375 "model_number": "QEMU NVMe Ctrl", 00:17:58.375 "serial_number": "12341", 00:17:58.375 "firmware_revision": "8.0.0", 00:17:58.375 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:58.375 "oacs": { 00:17:58.375 "security": 0, 00:17:58.375 "format": 1, 00:17:58.375 "firmware": 0, 00:17:58.375 "ns_manage": 1 00:17:58.375 }, 00:17:58.375 "multi_ctrlr": false, 00:17:58.375 "ana_reporting": false 00:17:58.375 }, 00:17:58.375 "vs": { 00:17:58.375 "nvme_version": "1.4" 00:17:58.375 }, 00:17:58.375 "ns_data": { 00:17:58.375 "id": 1, 00:17:58.375 "can_share": false 00:17:58.375 } 00:17:58.375 } 00:17:58.375 ], 00:17:58.375 "mp_policy": "active_passive" 00:17:58.375 } 00:17:58.375 } 00:17:58.375 ]' 00:17:58.375 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:58.375 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:58.375 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:58.375 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:58.375 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:58.375 13:30:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:17:58.375 13:30:54 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:58.376 13:30:54 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:58.376 13:30:54 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:58.376 13:30:54 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:58.376 13:30:54 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:58.634 13:30:54 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=8f5d471d-1909-4e47-9532-580e2a600ded 00:17:58.634 13:30:54 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:58.634 13:30:54 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f5d471d-1909-4e47-9532-580e2a600ded 00:17:58.892 13:30:54 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=fdb9914e-c82d-45cf-9e15-39ac06515d4f 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fdb9914e-c82d-45cf-9e15-39ac06515d4f 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=89600fc6-c5b7-4c6b-9d83-a439c892e635 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 89600fc6-c5b7-4c6b-9d83-a439c892e635 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=89600fc6-c5b7-4c6b-9d83-a439c892e635 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:59.150 13:30:55 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 89600fc6-c5b7-4c6b-9d83-a439c892e635 00:17:59.150 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=89600fc6-c5b7-4c6b-9d83-a439c892e635 00:17:59.150 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:59.150 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:59.150 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:59.150 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89600fc6-c5b7-4c6b-9d83-a439c892e635 00:17:59.409 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:59.409 { 00:17:59.409 "name": "89600fc6-c5b7-4c6b-9d83-a439c892e635", 00:17:59.409 "aliases": [ 00:17:59.409 "lvs/nvme0n1p0" 00:17:59.409 ], 00:17:59.409 "product_name": "Logical Volume", 00:17:59.409 "block_size": 4096, 00:17:59.409 "num_blocks": 26476544, 00:17:59.409 "uuid": "89600fc6-c5b7-4c6b-9d83-a439c892e635", 00:17:59.409 "assigned_rate_limits": { 00:17:59.409 "rw_ios_per_sec": 0, 00:17:59.409 "rw_mbytes_per_sec": 0, 00:17:59.409 "r_mbytes_per_sec": 0, 00:17:59.409 "w_mbytes_per_sec": 0 00:17:59.409 }, 00:17:59.409 "claimed": false, 00:17:59.409 "zoned": false, 00:17:59.409 "supported_io_types": { 00:17:59.409 "read": true, 00:17:59.409 "write": true, 00:17:59.409 "unmap": true, 00:17:59.409 "flush": false, 00:17:59.409 "reset": true, 00:17:59.409 "nvme_admin": false, 00:17:59.409 "nvme_io": false, 00:17:59.409 "nvme_io_md": false, 00:17:59.409 "write_zeroes": true, 00:17:59.409 "zcopy": false, 00:17:59.409 "get_zone_info": false, 00:17:59.409 "zone_management": false, 00:17:59.409 "zone_append": false, 00:17:59.409 "compare": false, 00:17:59.409 "compare_and_write": false, 00:17:59.409 "abort": false, 00:17:59.409 "seek_hole": true, 00:17:59.409 "seek_data": true, 00:17:59.409 "copy": false, 00:17:59.409 "nvme_iov_md": false 00:17:59.409 }, 00:17:59.409 "driver_specific": { 00:17:59.409 "lvol": { 00:17:59.409 "lvol_store_uuid": "fdb9914e-c82d-45cf-9e15-39ac06515d4f", 00:17:59.409 "base_bdev": "nvme0n1", 00:17:59.410 "thin_provision": true, 00:17:59.410 "num_allocated_clusters": 0, 00:17:59.410 "snapshot": false, 00:17:59.410 "clone": false, 00:17:59.410 "esnap_clone": false 00:17:59.410 } 00:17:59.410 } 00:17:59.410 } 00:17:59.410 ]' 00:17:59.410 13:30:55 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:59.410 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:59.410 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:59.410 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:59.410 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:59.410 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:59.410 13:30:55 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:59.410 13:30:55 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:59.410 13:30:55 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:59.669 13:30:55 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:59.669 13:30:55 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:59.669 13:30:55 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 89600fc6-c5b7-4c6b-9d83-a439c892e635 00:17:59.669 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=89600fc6-c5b7-4c6b-9d83-a439c892e635 00:17:59.669 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:59.669 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:59.669 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:59.669 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89600fc6-c5b7-4c6b-9d83-a439c892e635 00:17:59.927 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:59.927 { 00:17:59.927 "name": "89600fc6-c5b7-4c6b-9d83-a439c892e635", 00:17:59.927 "aliases": [ 00:17:59.927 "lvs/nvme0n1p0" 00:17:59.927 ], 00:17:59.927 "product_name": "Logical Volume", 00:17:59.927 "block_size": 4096, 00:17:59.927 "num_blocks": 26476544, 00:17:59.927 "uuid": "89600fc6-c5b7-4c6b-9d83-a439c892e635", 00:17:59.927 "assigned_rate_limits": { 00:17:59.927 "rw_ios_per_sec": 0, 00:17:59.927 "rw_mbytes_per_sec": 0, 00:17:59.927 "r_mbytes_per_sec": 0, 00:17:59.927 "w_mbytes_per_sec": 0 00:17:59.927 }, 00:17:59.927 "claimed": false, 00:17:59.927 "zoned": false, 00:17:59.927 "supported_io_types": { 00:17:59.927 "read": true, 00:17:59.927 "write": true, 00:17:59.927 "unmap": true, 00:17:59.927 "flush": false, 00:17:59.927 "reset": true, 00:17:59.927 "nvme_admin": false, 00:17:59.927 "nvme_io": false, 00:17:59.927 "nvme_io_md": false, 00:17:59.927 "write_zeroes": true, 00:17:59.927 "zcopy": false, 00:17:59.927 "get_zone_info": false, 00:17:59.927 "zone_management": false, 00:17:59.927 "zone_append": false, 00:17:59.927 "compare": false, 00:17:59.927 "compare_and_write": false, 00:17:59.927 "abort": false, 00:17:59.927 "seek_hole": true, 00:17:59.927 "seek_data": true, 00:17:59.927 "copy": false, 00:17:59.927 "nvme_iov_md": false 00:17:59.927 }, 00:17:59.927 "driver_specific": { 00:17:59.927 "lvol": { 00:17:59.927 "lvol_store_uuid": "fdb9914e-c82d-45cf-9e15-39ac06515d4f", 00:17:59.927 "base_bdev": "nvme0n1", 00:17:59.927 "thin_provision": true, 00:17:59.927 "num_allocated_clusters": 0, 00:17:59.927 "snapshot": false, 00:17:59.927 "clone": false, 00:17:59.927 "esnap_clone": false 00:17:59.927 } 00:17:59.927 } 00:17:59.927 } 00:17:59.927 ]' 00:17:59.927 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:17:59.927 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:59.927 13:30:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:59.927 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:59.927 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:59.927 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:59.927 13:30:56 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:59.927 13:30:56 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:00.187 13:30:56 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:00.187 13:30:56 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 89600fc6-c5b7-4c6b-9d83-a439c892e635 00:18:00.187 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=89600fc6-c5b7-4c6b-9d83-a439c892e635 00:18:00.187 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:00.187 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:00.187 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:00.187 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89600fc6-c5b7-4c6b-9d83-a439c892e635 00:18:00.446 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:00.446 { 00:18:00.446 "name": "89600fc6-c5b7-4c6b-9d83-a439c892e635", 00:18:00.446 "aliases": [ 00:18:00.446 "lvs/nvme0n1p0" 00:18:00.446 ], 00:18:00.446 "product_name": "Logical Volume", 00:18:00.446 "block_size": 4096, 00:18:00.446 "num_blocks": 26476544, 00:18:00.446 "uuid": "89600fc6-c5b7-4c6b-9d83-a439c892e635", 00:18:00.446 "assigned_rate_limits": { 00:18:00.446 "rw_ios_per_sec": 0, 00:18:00.446 "rw_mbytes_per_sec": 0, 00:18:00.446 "r_mbytes_per_sec": 0, 00:18:00.446 "w_mbytes_per_sec": 0 00:18:00.446 }, 00:18:00.446 "claimed": false, 00:18:00.446 "zoned": false, 00:18:00.446 "supported_io_types": { 00:18:00.446 "read": true, 00:18:00.446 "write": true, 00:18:00.446 "unmap": true, 00:18:00.446 "flush": false, 00:18:00.446 "reset": true, 00:18:00.446 "nvme_admin": false, 00:18:00.446 "nvme_io": false, 00:18:00.446 "nvme_io_md": false, 00:18:00.446 "write_zeroes": true, 00:18:00.446 "zcopy": false, 00:18:00.446 "get_zone_info": false, 00:18:00.446 "zone_management": false, 00:18:00.446 "zone_append": false, 00:18:00.446 "compare": false, 00:18:00.446 "compare_and_write": false, 00:18:00.446 "abort": false, 00:18:00.446 "seek_hole": true, 00:18:00.446 "seek_data": true, 00:18:00.446 "copy": false, 00:18:00.446 "nvme_iov_md": false 00:18:00.446 }, 00:18:00.446 "driver_specific": { 00:18:00.446 "lvol": { 00:18:00.446 "lvol_store_uuid": "fdb9914e-c82d-45cf-9e15-39ac06515d4f", 00:18:00.446 "base_bdev": "nvme0n1", 00:18:00.446 "thin_provision": true, 00:18:00.446 "num_allocated_clusters": 0, 00:18:00.446 "snapshot": false, 00:18:00.446 "clone": false, 00:18:00.446 "esnap_clone": false 00:18:00.446 } 00:18:00.446 } 00:18:00.446 } 00:18:00.446 ]' 00:18:00.446 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:00.446 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:00.446 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:00.446 13:30:56 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:18:00.446 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:00.446 13:30:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:00.446 13:30:56 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:00.446 13:30:56 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 89600fc6-c5b7-4c6b-9d83-a439c892e635 --l2p_dram_limit 10' 00:18:00.446 13:30:56 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:00.446 13:30:56 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:00.446 13:30:56 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:00.446 13:30:56 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:00.446 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:00.446 13:30:56 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 89600fc6-c5b7-4c6b-9d83-a439c892e635 --l2p_dram_limit 10 -c nvc0n1p0 00:18:00.705 [2024-11-18 13:30:56.678918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.678961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:00.705 [2024-11-18 13:30:56.678973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:00.705 [2024-11-18 13:30:56.678981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.679022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.679033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.705 [2024-11-18 13:30:56.679041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:00.705 [2024-11-18 13:30:56.679050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.679067] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:00.705 [2024-11-18 13:30:56.679321] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:00.705 [2024-11-18 13:30:56.679335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.679342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.705 [2024-11-18 13:30:56.679349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:00.705 [2024-11-18 13:30:56.679355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.679380] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3cc8cf29-3966-4e6d-a4c5-0d674196dd1d 00:18:00.705 [2024-11-18 13:30:56.680347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.680369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:00.705 [2024-11-18 13:30:56.680381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:00.705 [2024-11-18 13:30:56.680387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.685108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 
13:30:56.685135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.705 [2024-11-18 13:30:56.685146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.664 ms 00:18:00.705 [2024-11-18 13:30:56.685152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.685222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.685232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.705 [2024-11-18 13:30:56.685239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:00.705 [2024-11-18 13:30:56.685245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.685288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.685298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:00.705 [2024-11-18 13:30:56.685306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:00.705 [2024-11-18 13:30:56.685311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.685329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.705 [2024-11-18 13:30:56.686582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.686692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.705 [2024-11-18 13:30:56.686703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:18:00.705 [2024-11-18 13:30:56.686711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.686739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.686746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:00.705 [2024-11-18 13:30:56.686756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:00.705 [2024-11-18 13:30:56.686764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.686777] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:00.705 [2024-11-18 13:30:56.686889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:00.705 [2024-11-18 13:30:56.686898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:00.705 [2024-11-18 13:30:56.686912] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:00.705 [2024-11-18 13:30:56.686919] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:00.705 [2024-11-18 13:30:56.686930] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:00.705 [2024-11-18 13:30:56.686937] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:00.705 [2024-11-18 13:30:56.686948] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:00.705 [2024-11-18 13:30:56.686953] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:00.705 [2024-11-18 13:30:56.686960] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:00.705 [2024-11-18 13:30:56.686965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.686973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:00.705 [2024-11-18 13:30:56.686979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:18:00.705 [2024-11-18 13:30:56.686985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.705 [2024-11-18 13:30:56.687048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.705 [2024-11-18 13:30:56.687058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:00.705 [2024-11-18 13:30:56.687063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:00.705 [2024-11-18 13:30:56.687070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.706 [2024-11-18 13:30:56.687152] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:00.706 [2024-11-18 13:30:56.687160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:00.706 [2024-11-18 13:30:56.687178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.706 [2024-11-18 13:30:56.687186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:00.706 [2024-11-18 13:30:56.687198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:00.706 [2024-11-18 13:30:56.687211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:00.706 [2024-11-18 13:30:56.687217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.706 [2024-11-18 13:30:56.687228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:00.706 [2024-11-18 13:30:56.687234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:00.706 [2024-11-18 13:30:56.687240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.706 [2024-11-18 13:30:56.687253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:00.706 [2024-11-18 13:30:56.687259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:00.706 [2024-11-18 13:30:56.687267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:00.706 [2024-11-18 13:30:56.687279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:00.706 [2024-11-18 13:30:56.687284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:00.706 [2024-11-18 13:30:56.687296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.706 [2024-11-18 13:30:56.687308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:00.706 
[2024-11-18 13:30:56.687314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.706 [2024-11-18 13:30:56.687325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:00.706 [2024-11-18 13:30:56.687330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.706 [2024-11-18 13:30:56.687342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:00.706 [2024-11-18 13:30:56.687349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.706 [2024-11-18 13:30:56.687360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:00.706 [2024-11-18 13:30:56.687366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.706 [2024-11-18 13:30:56.687378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:00.706 [2024-11-18 13:30:56.687384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:00.706 [2024-11-18 13:30:56.687388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.706 [2024-11-18 13:30:56.687395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:00.706 [2024-11-18 13:30:56.687400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:00.706 [2024-11-18 13:30:56.687406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:00.706 [2024-11-18 13:30:56.687417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:00.706 [2024-11-18 13:30:56.687422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687428] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:00.706 [2024-11-18 13:30:56.687433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:00.706 [2024-11-18 13:30:56.687442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.706 [2024-11-18 13:30:56.687447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.706 [2024-11-18 13:30:56.687455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:00.706 [2024-11-18 13:30:56.687460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:00.706 [2024-11-18 13:30:56.687466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:00.706 [2024-11-18 13:30:56.687472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:00.706 [2024-11-18 13:30:56.687478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:00.706 [2024-11-18 13:30:56.687483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:00.706 [2024-11-18 13:30:56.687492] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:00.706 [2024-11-18 
13:30:56.687500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.706 [2024-11-18 13:30:56.687508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:00.706 [2024-11-18 13:30:56.687514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:00.706 [2024-11-18 13:30:56.687522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:00.706 [2024-11-18 13:30:56.687528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:00.706 [2024-11-18 13:30:56.687535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:00.706 [2024-11-18 13:30:56.687540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:00.706 [2024-11-18 13:30:56.687548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:00.706 [2024-11-18 13:30:56.687553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:00.706 [2024-11-18 13:30:56.687560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:00.706 [2024-11-18 13:30:56.687565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:00.706 [2024-11-18 13:30:56.687572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:00.706 [2024-11-18 13:30:56.687578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:00.706 [2024-11-18 13:30:56.687584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:00.706 [2024-11-18 13:30:56.687590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:00.706 [2024-11-18 13:30:56.687596] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:00.706 [2024-11-18 13:30:56.687602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.706 [2024-11-18 13:30:56.687609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:00.706 [2024-11-18 13:30:56.687615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:00.706 [2024-11-18 13:30:56.687622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:00.706 [2024-11-18 13:30:56.687627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:00.706 [2024-11-18 13:30:56.687634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.706 [2024-11-18 13:30:56.687639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:00.706 [2024-11-18 13:30:56.687648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:18:00.706 [2024-11-18 13:30:56.687653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.706 [2024-11-18 13:30:56.687686] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:00.706 [2024-11-18 13:30:56.687693] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:03.258 [2024-11-18 13:30:58.996049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.258 [2024-11-18 13:30:58.996116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:03.258 [2024-11-18 13:30:58.996132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2308.345 ms 00:18:03.258 [2024-11-18 13:30:58.996141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.258 [2024-11-18 13:30:59.004455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.258 [2024-11-18 13:30:59.004495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.258 [2024-11-18 13:30:59.004509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.211 ms 00:18:03.258 [2024-11-18 13:30:59.004516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.258 [2024-11-18 13:30:59.004613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.258 [2024-11-18 13:30:59.004622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:03.258 [2024-11-18 13:30:59.004637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:03.259 [2024-11-18 13:30:59.004645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.259 [2024-11-18 13:30:59.013015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.259 [2024-11-18 13:30:59.013050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.259 [2024-11-18 13:30:59.013062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.325 ms 00:18:03.259 [2024-11-18 13:30:59.013069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.259 [2024-11-18 13:30:59.013098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.259 [2024-11-18 13:30:59.013107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.259 [2024-11-18 13:30:59.013118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:03.259 [2024-11-18 13:30:59.013125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.259 [2024-11-18 13:30:59.013461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.259 [2024-11-18 13:30:59.013481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.259 [2024-11-18 13:30:59.013492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:18:03.259 [2024-11-18 13:30:59.013499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.259 
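The startup sequence above (layout setup, region dump, NV cache scrub) belongs to the FTL bdev assembled by the RPCs traced at the start of this step. Stripped of the xtrace noise, that assembly is roughly the following sketch; the bus address, sizes and UUID are copied from the trace and the rpc.py path is shortened:

    # The PCIe controller at 0000:00:10.0 serves as the non-volatile cache;
    # a 5171 MiB split of it becomes the actual cache bdev (5171 is 5% of the
    # 103424 MiB base volume rounded down, presumably how common.sh sizes it).
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1    # -> nvc0n1p0

    # Create ftl0 on the logical volume, capping the DRAM-resident L2P at 10 MiB.
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 89600fc6-c5b7-4c6b-9d83-a439c892e635 \
        --l2p_dram_limit 10 -c nvc0n1p0

Since this is a fresh instance (layout setup mode 1, "Create new FTL"), the cache data region has to be wiped before use; the 'Scrub NV cache' action above accounts for the bulk of the startup time, 2308.345 ms for the 5 chunks reported in the layout.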
[2024-11-18 13:30:59.013604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.259 [2024-11-18 13:30:59.013614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.259 [2024-11-18 13:30:59.013623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:03.259 [2024-11-18 13:30:59.013631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.259 [2024-11-18 13:30:59.018768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.259 [2024-11-18 13:30:59.018797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.259 [2024-11-18 13:30:59.018808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.116 ms 00:18:03.259 [2024-11-18 13:30:59.018816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.259 [2024-11-18 13:30:59.026942] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:03.259 [2024-11-18 13:30:59.029533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.259 [2024-11-18 13:30:59.029564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:03.259 [2024-11-18 13:30:59.029575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.653 ms 00:18:03.259 [2024-11-18 13:30:59.029584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.260 [2024-11-18 13:30:59.083404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.260 [2024-11-18 13:30:59.083451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:03.260 [2024-11-18 13:30:59.083467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.797 ms 00:18:03.260 [2024-11-18 13:30:59.083479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.260 [2024-11-18 13:30:59.083655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.260 [2024-11-18 13:30:59.083668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:03.260 [2024-11-18 13:30:59.083677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:18:03.260 [2024-11-18 13:30:59.083686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.260 [2024-11-18 13:30:59.086572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.260 [2024-11-18 13:30:59.086732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:03.260 [2024-11-18 13:30:59.086748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.864 ms 00:18:03.260 [2024-11-18 13:30:59.086806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.260 [2024-11-18 13:30:59.089140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.260 [2024-11-18 13:30:59.089187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:03.260 [2024-11-18 13:30:59.089197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:18:03.260 [2024-11-18 13:30:59.089206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.260 [2024-11-18 13:30:59.089494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.260 [2024-11-18 13:30:59.089509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:03.260 
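The L2P numbers above are internally consistent: 20971520 entries times 4 bytes per entry ('L2P address size: 4') is 83886080 bytes, i.e. exactly the 80.00 MiB l2p region in the NV cache layout. With --l2p_dram_limit 10 only a slice of that table is kept in DRAM, hence the ftl_l2p_cache_init notice 'l2p maximum resident size is: 9 (of 10) MiB'; the remainder is presumably paged against the on-cache l2p region as needed.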
[2024-11-18 13:30:59.089518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:18:03.260 [2024-11-18 13:30:59.089528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.260 [2024-11-18 13:30:59.113631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.261 [2024-11-18 13:30:59.113668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:03.261 [2024-11-18 13:30:59.113679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.085 ms 00:18:03.261 [2024-11-18 13:30:59.113691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.261 [2024-11-18 13:30:59.117150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.261 [2024-11-18 13:30:59.117309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:03.261 [2024-11-18 13:30:59.117325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:18:03.261 [2024-11-18 13:30:59.117339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.261 [2024-11-18 13:30:59.119987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.261 [2024-11-18 13:30:59.120018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:03.261 [2024-11-18 13:30:59.120026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:18:03.261 [2024-11-18 13:30:59.120035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.261 [2024-11-18 13:30:59.123340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.262 [2024-11-18 13:30:59.123375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:03.262 [2024-11-18 13:30:59.123385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:18:03.262 [2024-11-18 13:30:59.123395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.262 [2024-11-18 13:30:59.123432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.262 [2024-11-18 13:30:59.123443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:03.262 [2024-11-18 13:30:59.123455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:03.262 [2024-11-18 13:30:59.123472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.262 [2024-11-18 13:30:59.123533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.262 [2024-11-18 13:30:59.123543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:03.262 [2024-11-18 13:30:59.123551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:03.262 [2024-11-18 13:30:59.123560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.262 [2024-11-18 13:30:59.124394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL start{ 00:18:03.262 "name": "ftl0", 00:18:03.262 "uuid": "3cc8cf29-3966-4e6d-a4c5-0d674196dd1d" 00:18:03.262 } 00:18:03.262 up', duration = 2445.095 ms, result 0 00:18:03.262 13:30:59 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:03.262 13:30:59 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:03.262 13:30:59 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:03.262 13:30:59 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:03.526 [2024-11-18 13:30:59.533272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.526 [2024-11-18 13:30:59.533313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:03.526 [2024-11-18 13:30:59.533327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:03.526 [2024-11-18 13:30:59.533337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.526 [2024-11-18 13:30:59.533362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:03.526 [2024-11-18 13:30:59.533798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.526 [2024-11-18 13:30:59.533825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:03.526 [2024-11-18 13:30:59.533834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:18:03.526 [2024-11-18 13:30:59.533843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.526 [2024-11-18 13:30:59.534091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.526 [2024-11-18 13:30:59.534102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:03.526 [2024-11-18 13:30:59.534111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:18:03.526 [2024-11-18 13:30:59.534126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.526 [2024-11-18 13:30:59.537378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.526 [2024-11-18 13:30:59.537402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:03.526 [2024-11-18 13:30:59.537411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:18:03.526 [2024-11-18 13:30:59.537421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.526 [2024-11-18 13:30:59.543603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.526 [2024-11-18 13:30:59.543730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:03.526 [2024-11-18 13:30:59.543745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:18:03.526 [2024-11-18 13:30:59.543755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.526 [2024-11-18 13:30:59.545185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.526 [2024-11-18 13:30:59.545221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:03.526 [2024-11-18 13:30:59.545230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:18:03.526 [2024-11-18 13:30:59.545239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.526 [2024-11-18 13:30:59.548818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.526 [2024-11-18 13:30:59.548854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:03.526 [2024-11-18 13:30:59.548864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.547 ms 00:18:03.526 [2024-11-18 13:30:59.548873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.526 [2024-11-18 13:30:59.548989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.526 [2024-11-18 13:30:59.549000] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:03.526 [2024-11-18 13:30:59.549011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:03.526 [2024-11-18 13:30:59.549020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.526 [2024-11-18 13:30:59.550667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.526 [2024-11-18 13:30:59.550779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:03.526 [2024-11-18 13:30:59.550793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:18:03.527 [2024-11-18 13:30:59.550804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.527 [2024-11-18 13:30:59.551849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.527 [2024-11-18 13:30:59.551880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:03.527 [2024-11-18 13:30:59.551889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:18:03.527 [2024-11-18 13:30:59.551897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.527 [2024-11-18 13:30:59.552899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.527 [2024-11-18 13:30:59.552933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:03.527 [2024-11-18 13:30:59.552942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:18:03.527 [2024-11-18 13:30:59.552951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.527 [2024-11-18 13:30:59.553953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.527 [2024-11-18 13:30:59.554055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:03.527 [2024-11-18 13:30:59.554068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:18:03.527 [2024-11-18 13:30:59.554077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.527 [2024-11-18 13:30:59.554104] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:03.527 [2024-11-18 13:30:59.554118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554219] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 
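The band-state dump above is part of the 'FTL shutdown' sequence kicked off by the unload traced at restore.sh@61-65. Stripped of xtrace, that teardown step is roughly the sketch below; the redirect target is an assumption, since the trace only shows the echo/rpc fragments, while .../test/ftl/config/ftl.json reappears later as spdk_dd's --json argument:

    # Capture the bdev subsystem config so the same FTL device can be re-attached later.
    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json    # assumed destination

    # Unload ftl0: persist L2P, NV cache and band metadata, then dump the band states seen above.
    scripts/rpc.py bdev_ftl_unload -b ftl0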
[2024-11-18 13:30:59.554425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:03.527 [2024-11-18 13:30:59.554630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:03.527 [2024-11-18 13:30:59.554786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:03.528 [2024-11-18 13:30:59.554982] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:03.528 [2024-11-18 13:30:59.554989] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3cc8cf29-3966-4e6d-a4c5-0d674196dd1d 00:18:03.528 [2024-11-18 13:30:59.554999] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:03.528 [2024-11-18 13:30:59.555006] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:03.528 [2024-11-18 13:30:59.555014] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:03.528 [2024-11-18 13:30:59.555021] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:03.528 [2024-11-18 13:30:59.555029] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:03.528 [2024-11-18 13:30:59.555039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:03.528 [2024-11-18 13:30:59.555048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:03.528 [2024-11-18 13:30:59.555054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:03.528 [2024-11-18 13:30:59.555062] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:03.528 [2024-11-18 13:30:59.555069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.528 [2024-11-18 13:30:59.555077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:03.528 [2024-11-18 13:30:59.555085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:18:03.528 [2024-11-18 13:30:59.555094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.556485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.528 [2024-11-18 13:30:59.556506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:03.528 [2024-11-18 13:30:59.556514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:18:03.528 [2024-11-18 13:30:59.556526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.556597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.528 [2024-11-18 13:30:59.556607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:03.528 [2024-11-18 13:30:59.556615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:03.528 [2024-11-18 13:30:59.556627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.561891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.561998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.528 [2024-11-18 13:30:59.562054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.562081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.562207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.562243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.528 [2024-11-18 13:30:59.562361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.562386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.562449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.562547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.528 [2024-11-18 13:30:59.562571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.562591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.562623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.562710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.528 [2024-11-18 13:30:59.562733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.562753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.571430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.571577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.528 [2024-11-18 13:30:59.571637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 
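In the statistics block above, WAF reads as the write-amplification factor, total device writes over user writes. At this point ftl0 has only been created and cleanly shut down, so the 960 total writes are all internal metadata traffic (superblock, band and chunk info, valid/trim maps) and, with 0 user writes, the 960/0 ratio is printed as inf.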
[2024-11-18 13:30:59.571664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.578980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.579123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.528 [2024-11-18 13:30:59.579251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.579278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.579357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.579422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.528 [2024-11-18 13:30:59.579458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.579478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.579523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.579550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.528 [2024-11-18 13:30:59.579569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.579588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.579727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.579758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:03.528 [2024-11-18 13:30:59.579840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.579865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.579916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.579989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:03.528 [2024-11-18 13:30:59.580012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.580062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.580114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.580139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.528 [2024-11-18 13:30:59.580201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.580226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.580280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.528 [2024-11-18 13:30:59.580350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:03.528 [2024-11-18 13:30:59.580374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.528 [2024-11-18 13:30:59.580394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.528 [2024-11-18 13:30:59.580556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.256 ms, result 0 00:18:03.528 true 00:18:03.528 13:30:59 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 85861 00:18:03.528 
13:30:59 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85861 ']' 00:18:03.528 13:30:59 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85861 00:18:03.528 13:30:59 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:18:03.528 13:30:59 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.528 13:30:59 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85861 00:18:03.528 13:30:59 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.528 13:30:59 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.528 killing process with pid 85861 00:18:03.528 13:30:59 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85861' 00:18:03.528 13:30:59 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 85861 00:18:03.528 13:30:59 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 85861 00:18:08.795 13:31:04 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:12.113 262144+0 records in 00:18:12.113 262144+0 records out 00:18:12.113 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.5249 s, 305 MB/s 00:18:12.113 13:31:07 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:14.024 13:31:09 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:14.024 [2024-11-18 13:31:09.896724] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:18:14.024 [2024-11-18 13:31:09.896825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86063 ] 00:18:14.024 [2024-11-18 13:31:10.050162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.024 [2024-11-18 13:31:10.075992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.287 [2024-11-18 13:31:10.185511] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:14.287 [2024-11-18 13:31:10.185579] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:14.287 [2024-11-18 13:31:10.345310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.287 [2024-11-18 13:31:10.345356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:14.287 [2024-11-18 13:31:10.345370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:14.287 [2024-11-18 13:31:10.345378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.287 [2024-11-18 13:31:10.345427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.287 [2024-11-18 13:31:10.345437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:14.287 [2024-11-18 13:31:10.345446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:14.287 [2024-11-18 13:31:10.345453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.287 [2024-11-18 13:31:10.345473] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:18:14.287 [2024-11-18 13:31:10.345719] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:14.287 [2024-11-18 13:31:10.345738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.287 [2024-11-18 13:31:10.345747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.287 [2024-11-18 13:31:10.345760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:18:14.287 [2024-11-18 13:31:10.345770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.287 [2024-11-18 13:31:10.347247] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:14.287 [2024-11-18 13:31:10.350544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.287 [2024-11-18 13:31:10.350580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:14.287 [2024-11-18 13:31:10.350591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:18:14.287 [2024-11-18 13:31:10.350604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.287 [2024-11-18 13:31:10.350666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.287 [2024-11-18 13:31:10.350682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:14.287 [2024-11-18 13:31:10.350691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:14.287 [2024-11-18 13:31:10.350698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.287 [2024-11-18 13:31:10.358300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.287 [2024-11-18 13:31:10.358467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:14.287 [2024-11-18 13:31:10.358835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.557 ms 00:18:14.287 [2024-11-18 13:31:10.358880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.287 [2024-11-18 13:31:10.359045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.287 [2024-11-18 13:31:10.359061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:14.287 [2024-11-18 13:31:10.359071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:14.287 [2024-11-18 13:31:10.359081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.287 [2024-11-18 13:31:10.359151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.287 [2024-11-18 13:31:10.359163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:14.287 [2024-11-18 13:31:10.359213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:14.288 [2024-11-18 13:31:10.359220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.288 [2024-11-18 13:31:10.359256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:14.288 [2024-11-18 13:31:10.361088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.288 [2024-11-18 13:31:10.361120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:14.288 [2024-11-18 13:31:10.361130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.838 ms 00:18:14.288 [2024-11-18 13:31:10.361144] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.288 [2024-11-18 13:31:10.361197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.288 [2024-11-18 13:31:10.361207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:14.288 [2024-11-18 13:31:10.361216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:14.288 [2024-11-18 13:31:10.361224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.288 [2024-11-18 13:31:10.361258] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:14.288 [2024-11-18 13:31:10.361280] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:14.288 [2024-11-18 13:31:10.361316] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:14.288 [2024-11-18 13:31:10.361333] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:14.288 [2024-11-18 13:31:10.361439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:14.288 [2024-11-18 13:31:10.361452] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:14.288 [2024-11-18 13:31:10.361463] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:14.288 [2024-11-18 13:31:10.361482] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:14.288 [2024-11-18 13:31:10.361491] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:14.288 [2024-11-18 13:31:10.361499] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:14.288 [2024-11-18 13:31:10.361506] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:14.288 [2024-11-18 13:31:10.361514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:14.288 [2024-11-18 13:31:10.361522] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:14.288 [2024-11-18 13:31:10.361530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.288 [2024-11-18 13:31:10.361537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:14.288 [2024-11-18 13:31:10.361545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:18:14.288 [2024-11-18 13:31:10.361554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.288 [2024-11-18 13:31:10.361638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.288 [2024-11-18 13:31:10.361655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:14.288 [2024-11-18 13:31:10.361664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:14.288 [2024-11-18 13:31:10.361671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.288 [2024-11-18 13:31:10.361771] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:14.288 [2024-11-18 13:31:10.361784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:14.288 [2024-11-18 13:31:10.361793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:14.288 
[2024-11-18 13:31:10.361802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.288 [2024-11-18 13:31:10.361812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:14.288 [2024-11-18 13:31:10.361827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:14.288 [2024-11-18 13:31:10.361836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:14.288 [2024-11-18 13:31:10.361844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:14.288 [2024-11-18 13:31:10.361852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:14.288 [2024-11-18 13:31:10.361859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.288 [2024-11-18 13:31:10.361870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:14.288 [2024-11-18 13:31:10.361879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:14.288 [2024-11-18 13:31:10.361886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.288 [2024-11-18 13:31:10.361894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:14.288 [2024-11-18 13:31:10.361902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:14.288 [2024-11-18 13:31:10.361912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.288 [2024-11-18 13:31:10.361920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:14.288 [2024-11-18 13:31:10.361929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:14.288 [2024-11-18 13:31:10.361937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.288 [2024-11-18 13:31:10.361946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:14.288 [2024-11-18 13:31:10.361954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:14.288 [2024-11-18 13:31:10.361961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.288 [2024-11-18 13:31:10.361969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:14.288 [2024-11-18 13:31:10.361977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:14.288 [2024-11-18 13:31:10.361985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.288 [2024-11-18 13:31:10.361993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:14.288 [2024-11-18 13:31:10.362005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:14.288 [2024-11-18 13:31:10.362013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.288 [2024-11-18 13:31:10.362022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:14.288 [2024-11-18 13:31:10.362030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:14.288 [2024-11-18 13:31:10.362038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.288 [2024-11-18 13:31:10.362046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:14.288 [2024-11-18 13:31:10.362053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:14.288 [2024-11-18 13:31:10.362061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.288 [2024-11-18 13:31:10.362068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:18:14.288 [2024-11-18 13:31:10.362076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:14.288 [2024-11-18 13:31:10.362084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.288 [2024-11-18 13:31:10.362091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:14.288 [2024-11-18 13:31:10.362098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:14.288 [2024-11-18 13:31:10.362105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.289 [2024-11-18 13:31:10.362112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:14.289 [2024-11-18 13:31:10.362119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:14.289 [2024-11-18 13:31:10.362130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.289 [2024-11-18 13:31:10.362137] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:14.289 [2024-11-18 13:31:10.362145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:14.289 [2024-11-18 13:31:10.362155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:14.289 [2024-11-18 13:31:10.362163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.289 [2024-11-18 13:31:10.362365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:14.289 [2024-11-18 13:31:10.362386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:14.289 [2024-11-18 13:31:10.362406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:14.289 [2024-11-18 13:31:10.362424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:14.289 [2024-11-18 13:31:10.362444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:14.289 [2024-11-18 13:31:10.362463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:14.289 [2024-11-18 13:31:10.362483] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:14.289 [2024-11-18 13:31:10.362515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.289 [2024-11-18 13:31:10.362584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:14.289 [2024-11-18 13:31:10.362618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:14.289 [2024-11-18 13:31:10.362647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:14.289 [2024-11-18 13:31:10.363019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:14.289 [2024-11-18 13:31:10.363042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:14.289 [2024-11-18 13:31:10.363051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:14.289 [2024-11-18 13:31:10.363059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:14.289 [2024-11-18 13:31:10.363067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:14.289 [2024-11-18 13:31:10.363075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:14.289 [2024-11-18 13:31:10.363083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:14.289 [2024-11-18 13:31:10.363090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:14.289 [2024-11-18 13:31:10.363097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:14.289 [2024-11-18 13:31:10.363105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:14.289 [2024-11-18 13:31:10.363112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:14.289 [2024-11-18 13:31:10.363120] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:14.289 [2024-11-18 13:31:10.363129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.289 [2024-11-18 13:31:10.363152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:14.289 [2024-11-18 13:31:10.363160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:14.289 [2024-11-18 13:31:10.363181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:14.289 [2024-11-18 13:31:10.363192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:14.289 [2024-11-18 13:31:10.363203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.289 [2024-11-18 13:31:10.363213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:14.289 [2024-11-18 13:31:10.363225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:18:14.289 [2024-11-18 13:31:10.363233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.289 [2024-11-18 13:31:10.376762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.289 [2024-11-18 13:31:10.376890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:14.289 [2024-11-18 13:31:10.376943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.443 ms 00:18:14.289 [2024-11-18 13:31:10.376967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.289 [2024-11-18 13:31:10.377069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.289 [2024-11-18 13:31:10.377092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:14.289 [2024-11-18 13:31:10.377112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 
00:18:14.289 [2024-11-18 13:31:10.377132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.289 [2024-11-18 13:31:10.397027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.289 [2024-11-18 13:31:10.397223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:14.289 [2024-11-18 13:31:10.397334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.814 ms 00:18:14.289 [2024-11-18 13:31:10.397365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.289 [2024-11-18 13:31:10.397436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.289 [2024-11-18 13:31:10.397466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:14.289 [2024-11-18 13:31:10.397491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:14.289 [2024-11-18 13:31:10.397661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.289 [2024-11-18 13:31:10.398230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.289 [2024-11-18 13:31:10.398349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:14.289 [2024-11-18 13:31:10.398492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:18:14.289 [2024-11-18 13:31:10.398530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.289 [2024-11-18 13:31:10.398715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.289 [2024-11-18 13:31:10.398744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:14.289 [2024-11-18 13:31:10.399189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:18:14.289 [2024-11-18 13:31:10.399245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.289 [2024-11-18 13:31:10.407093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.289 [2024-11-18 13:31:10.407248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:14.289 [2024-11-18 13:31:10.407316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.787 ms 00:18:14.289 [2024-11-18 13:31:10.407346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.289 [2024-11-18 13:31:10.410971] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:14.551 [2024-11-18 13:31:10.411106] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:14.551 [2024-11-18 13:31:10.411187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.551 [2024-11-18 13:31:10.411211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:14.551 [2024-11-18 13:31:10.411232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.715 ms 00:18:14.551 [2024-11-18 13:31:10.411252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.551 [2024-11-18 13:31:10.426585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.551 [2024-11-18 13:31:10.426705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:14.551 [2024-11-18 13:31:10.426765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.281 ms 00:18:14.551 [2024-11-18 13:31:10.426787] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:14.551 [2024-11-18 13:31:10.429111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.551 [2024-11-18 13:31:10.429248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:14.551 [2024-11-18 13:31:10.429301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:18:14.551 [2024-11-18 13:31:10.429323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.551 [2024-11-18 13:31:10.431541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.551 [2024-11-18 13:31:10.431664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:14.551 [2024-11-18 13:31:10.431713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:18:14.551 [2024-11-18 13:31:10.431737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.551 [2024-11-18 13:31:10.432197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.551 [2024-11-18 13:31:10.432305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:14.551 [2024-11-18 13:31:10.432362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:18:14.551 [2024-11-18 13:31:10.432385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.551 [2024-11-18 13:31:10.455735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.551 [2024-11-18 13:31:10.455899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:14.551 [2024-11-18 13:31:10.455953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.040 ms 00:18:14.551 [2024-11-18 13:31:10.455976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.551 [2024-11-18 13:31:10.464085] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:14.551 [2024-11-18 13:31:10.467763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.551 [2024-11-18 13:31:10.467878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:14.551 [2024-11-18 13:31:10.467938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.731 ms 00:18:14.551 [2024-11-18 13:31:10.467965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.551 [2024-11-18 13:31:10.468063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.552 [2024-11-18 13:31:10.468092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:14.552 [2024-11-18 13:31:10.468114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:14.552 [2024-11-18 13:31:10.468133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.552 [2024-11-18 13:31:10.468305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.552 [2024-11-18 13:31:10.468320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:14.552 [2024-11-18 13:31:10.468330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:14.552 [2024-11-18 13:31:10.468344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.552 [2024-11-18 13:31:10.468369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.552 [2024-11-18 13:31:10.468379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:18:14.552 [2024-11-18 13:31:10.468388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:14.552 [2024-11-18 13:31:10.468396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.552 [2024-11-18 13:31:10.468433] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:14.552 [2024-11-18 13:31:10.468445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.552 [2024-11-18 13:31:10.468458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:14.552 [2024-11-18 13:31:10.468467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:14.552 [2024-11-18 13:31:10.468475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.552 [2024-11-18 13:31:10.473404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.552 [2024-11-18 13:31:10.473525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:14.552 [2024-11-18 13:31:10.473575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.905 ms 00:18:14.552 [2024-11-18 13:31:10.473598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.552 [2024-11-18 13:31:10.473683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.552 [2024-11-18 13:31:10.473709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:14.552 [2024-11-18 13:31:10.473733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:14.552 [2024-11-18 13:31:10.473752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.552 [2024-11-18 13:31:10.474944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.170 ms, result 0 00:18:15.493  [2024-11-18T13:31:12.565Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-18T13:31:13.509Z] Copying: 32/1024 [MB] (14 MBps) [2024-11-18T13:31:14.895Z] Copying: 51/1024 [MB] (18 MBps) [2024-11-18T13:31:15.840Z] Copying: 64/1024 [MB] (13 MBps) [2024-11-18T13:31:16.781Z] Copying: 85/1024 [MB] (20 MBps) [2024-11-18T13:31:17.726Z] Copying: 99/1024 [MB] (13 MBps) [2024-11-18T13:31:18.671Z] Copying: 112/1024 [MB] (12 MBps) [2024-11-18T13:31:19.612Z] Copying: 127/1024 [MB] (15 MBps) [2024-11-18T13:31:20.551Z] Copying: 143/1024 [MB] (16 MBps) [2024-11-18T13:31:21.493Z] Copying: 156/1024 [MB] (13 MBps) [2024-11-18T13:31:22.876Z] Copying: 167/1024 [MB] (10 MBps) [2024-11-18T13:31:23.817Z] Copying: 183/1024 [MB] (16 MBps) [2024-11-18T13:31:24.760Z] Copying: 193/1024 [MB] (10 MBps) [2024-11-18T13:31:25.703Z] Copying: 207/1024 [MB] (14 MBps) [2024-11-18T13:31:26.642Z] Copying: 223120/1048576 [kB] (10140 kBps) [2024-11-18T13:31:27.578Z] Copying: 233288/1048576 [kB] (10168 kBps) [2024-11-18T13:31:28.517Z] Copying: 248/1024 [MB] (20 MBps) [2024-11-18T13:31:29.897Z] Copying: 268/1024 [MB] (19 MBps) [2024-11-18T13:31:30.836Z] Copying: 282/1024 [MB] (14 MBps) [2024-11-18T13:31:31.778Z] Copying: 305/1024 [MB] (23 MBps) [2024-11-18T13:31:32.718Z] Copying: 320/1024 [MB] (14 MBps) [2024-11-18T13:31:33.662Z] Copying: 339/1024 [MB] (18 MBps) [2024-11-18T13:31:34.604Z] Copying: 354/1024 [MB] (15 MBps) [2024-11-18T13:31:35.545Z] Copying: 367/1024 [MB] (12 MBps) [2024-11-18T13:31:36.929Z] Copying: 384/1024 [MB] (17 MBps) [2024-11-18T13:31:37.554Z] Copying: 400/1024 [MB] (16 MBps) [2024-11-18T13:31:38.518Z] Copying: 412/1024 [MB] (11 MBps) 
[2024-11-18T13:31:39.904Z] Copying: 427/1024 [MB] (15 MBps) [2024-11-18T13:31:40.847Z] Copying: 445/1024 [MB] (17 MBps) [2024-11-18T13:31:41.788Z] Copying: 459/1024 [MB] (14 MBps) [2024-11-18T13:31:42.731Z] Copying: 479/1024 [MB] (20 MBps) [2024-11-18T13:31:43.673Z] Copying: 491/1024 [MB] (11 MBps) [2024-11-18T13:31:44.612Z] Copying: 502/1024 [MB] (11 MBps) [2024-11-18T13:31:45.554Z] Copying: 513/1024 [MB] (11 MBps) [2024-11-18T13:31:46.496Z] Copying: 524/1024 [MB] (10 MBps) [2024-11-18T13:31:47.881Z] Copying: 534/1024 [MB] (10 MBps) [2024-11-18T13:31:48.824Z] Copying: 545/1024 [MB] (11 MBps) [2024-11-18T13:31:49.766Z] Copying: 556/1024 [MB] (10 MBps) [2024-11-18T13:31:50.707Z] Copying: 579812/1048576 [kB] (10068 kBps) [2024-11-18T13:31:51.652Z] Copying: 577/1024 [MB] (11 MBps) [2024-11-18T13:31:52.594Z] Copying: 588/1024 [MB] (10 MBps) [2024-11-18T13:31:53.534Z] Copying: 598/1024 [MB] (10 MBps) [2024-11-18T13:31:54.920Z] Copying: 613/1024 [MB] (14 MBps) [2024-11-18T13:31:55.493Z] Copying: 624/1024 [MB] (11 MBps) [2024-11-18T13:31:56.881Z] Copying: 649360/1048576 [kB] (10136 kBps) [2024-11-18T13:31:57.825Z] Copying: 659320/1048576 [kB] (9960 kBps) [2024-11-18T13:31:58.767Z] Copying: 654/1024 [MB] (10 MBps) [2024-11-18T13:31:59.713Z] Copying: 665/1024 [MB] (11 MBps) [2024-11-18T13:32:00.657Z] Copying: 677/1024 [MB] (11 MBps) [2024-11-18T13:32:01.597Z] Copying: 688/1024 [MB] (11 MBps) [2024-11-18T13:32:02.539Z] Copying: 699/1024 [MB] (11 MBps) [2024-11-18T13:32:03.923Z] Copying: 711/1024 [MB] (11 MBps) [2024-11-18T13:32:04.495Z] Copying: 724/1024 [MB] (12 MBps) [2024-11-18T13:32:05.882Z] Copying: 735/1024 [MB] (11 MBps) [2024-11-18T13:32:06.827Z] Copying: 750/1024 [MB] (15 MBps) [2024-11-18T13:32:07.771Z] Copying: 762/1024 [MB] (11 MBps) [2024-11-18T13:32:08.714Z] Copying: 773/1024 [MB] (11 MBps) [2024-11-18T13:32:09.684Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-18T13:32:10.650Z] Copying: 796/1024 [MB] (11 MBps) [2024-11-18T13:32:11.595Z] Copying: 807/1024 [MB] (11 MBps) [2024-11-18T13:32:12.538Z] Copying: 823/1024 [MB] (15 MBps) [2024-11-18T13:32:13.922Z] Copying: 837/1024 [MB] (14 MBps) [2024-11-18T13:32:14.492Z] Copying: 852/1024 [MB] (15 MBps) [2024-11-18T13:32:15.878Z] Copying: 864/1024 [MB] (11 MBps) [2024-11-18T13:32:16.820Z] Copying: 875/1024 [MB] (11 MBps) [2024-11-18T13:32:17.763Z] Copying: 886/1024 [MB] (11 MBps) [2024-11-18T13:32:18.706Z] Copying: 898/1024 [MB] (11 MBps) [2024-11-18T13:32:19.650Z] Copying: 909/1024 [MB] (11 MBps) [2024-11-18T13:32:20.595Z] Copying: 920/1024 [MB] (11 MBps) [2024-11-18T13:32:21.535Z] Copying: 932/1024 [MB] (11 MBps) [2024-11-18T13:32:22.919Z] Copying: 943/1024 [MB] (11 MBps) [2024-11-18T13:32:23.491Z] Copying: 955/1024 [MB] (11 MBps) [2024-11-18T13:32:24.877Z] Copying: 965/1024 [MB] (10 MBps) [2024-11-18T13:32:25.817Z] Copying: 999116/1048576 [kB] (10176 kBps) [2024-11-18T13:32:26.762Z] Copying: 986/1024 [MB] (10 MBps) [2024-11-18T13:32:27.703Z] Copying: 997/1024 [MB] (11 MBps) [2024-11-18T13:32:28.646Z] Copying: 1008/1024 [MB] (10 MBps) [2024-11-18T13:32:28.909Z] Copying: 1019/1024 [MB] (10 MBps) [2024-11-18T13:32:28.909Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-18 13:32:28.808613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.781 [2024-11-18 13:32:28.808670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:32.781 [2024-11-18 13:32:28.808683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:32.782 [2024-11-18 13:32:28.808693] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.808715] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:32.782 [2024-11-18 13:32:28.809260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.809275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:32.782 [2024-11-18 13:32:28.809289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:19:32.782 [2024-11-18 13:32:28.809298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.811355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.811382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:32.782 [2024-11-18 13:32:28.811389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:19:32.782 [2024-11-18 13:32:28.811395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.828181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.828213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:32.782 [2024-11-18 13:32:28.828222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.774 ms 00:19:32.782 [2024-11-18 13:32:28.828228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.832864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.832885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:32.782 [2024-11-18 13:32:28.832892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.612 ms 00:19:32.782 [2024-11-18 13:32:28.832899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.834761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.834880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:32.782 [2024-11-18 13:32:28.834892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.820 ms 00:19:32.782 [2024-11-18 13:32:28.834898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.838490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.838545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:32.782 [2024-11-18 13:32:28.838558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.562 ms 00:19:32.782 [2024-11-18 13:32:28.838581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.838756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.838776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:32.782 [2024-11-18 13:32:28.838786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:32.782 [2024-11-18 13:32:28.838794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.841413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.841547] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:32.782 [2024-11-18 13:32:28.841563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:19:32.782 [2024-11-18 13:32:28.841570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.843608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.843638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:32.782 [2024-11-18 13:32:28.843647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:19:32.782 [2024-11-18 13:32:28.843654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.845463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.845496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:32.782 [2024-11-18 13:32:28.845504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:19:32.782 [2024-11-18 13:32:28.845511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.847197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.782 [2024-11-18 13:32:28.847226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:32.782 [2024-11-18 13:32:28.847234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:19:32.782 [2024-11-18 13:32:28.847241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.782 [2024-11-18 13:32:28.847268] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:32.782 [2024-11-18 13:32:28.847287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847561] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:32.782 [2024-11-18 13:32:28.847653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847740] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 
13:32:28.847920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.847999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.848006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:32.783 [2024-11-18 13:32:28.848022] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:32.783 [2024-11-18 13:32:28.848034] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3cc8cf29-3966-4e6d-a4c5-0d674196dd1d 00:19:32.783 [2024-11-18 13:32:28.848042] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:32.783 [2024-11-18 13:32:28.848049] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:32.783 [2024-11-18 13:32:28.848056] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:32.783 [2024-11-18 13:32:28.848063] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:32.783 [2024-11-18 13:32:28.848071] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:32.783 [2024-11-18 13:32:28.848078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:32.783 [2024-11-18 13:32:28.848086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:32.783 [2024-11-18 13:32:28.848092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:32.783 [2024-11-18 13:32:28.848098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:32.783 [2024-11-18 13:32:28.848106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.783 [2024-11-18 13:32:28.848121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:32.783 [2024-11-18 13:32:28.848134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:19:32.783 [2024-11-18 13:32:28.848141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:32.783 [2024-11-18 13:32:28.849518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.783 [2024-11-18 13:32:28.849540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:32.783 [2024-11-18 13:32:28.849549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:19:32.783 [2024-11-18 13:32:28.849557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.783 [2024-11-18 13:32:28.849631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.783 [2024-11-18 13:32:28.849643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:32.783 [2024-11-18 13:32:28.849651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:32.783 [2024-11-18 13:32:28.849659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.783 [2024-11-18 13:32:28.854399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.783 [2024-11-18 13:32:28.854432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.783 [2024-11-18 13:32:28.854441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.783 [2024-11-18 13:32:28.854448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.783 [2024-11-18 13:32:28.854507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.783 [2024-11-18 13:32:28.854518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.783 [2024-11-18 13:32:28.854526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.783 [2024-11-18 13:32:28.854533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.783 [2024-11-18 13:32:28.854582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.783 [2024-11-18 13:32:28.854591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.783 [2024-11-18 13:32:28.854598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.783 [2024-11-18 13:32:28.854605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.783 [2024-11-18 13:32:28.854620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.783 [2024-11-18 13:32:28.854627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.783 [2024-11-18 13:32:28.854636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.783 [2024-11-18 13:32:28.854644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.783 [2024-11-18 13:32:28.863378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.783 [2024-11-18 13:32:28.863413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.783 [2024-11-18 13:32:28.863423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.783 [2024-11-18 13:32:28.863430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.783 [2024-11-18 13:32:28.870083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.783 [2024-11-18 13:32:28.870116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.783 [2024-11-18 13:32:28.870131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.783 [2024-11-18 
13:32:28.870138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.784 [2024-11-18 13:32:28.870158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.784 [2024-11-18 13:32:28.870337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.784 [2024-11-18 13:32:28.870367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.784 [2024-11-18 13:32:28.870388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.784 [2024-11-18 13:32:28.870442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.784 [2024-11-18 13:32:28.870453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.784 [2024-11-18 13:32:28.870460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.784 [2024-11-18 13:32:28.870472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.784 [2024-11-18 13:32:28.870544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.784 [2024-11-18 13:32:28.870554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.784 [2024-11-18 13:32:28.870561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.784 [2024-11-18 13:32:28.870569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.784 [2024-11-18 13:32:28.870596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.784 [2024-11-18 13:32:28.870604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:32.784 [2024-11-18 13:32:28.870612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.784 [2024-11-18 13:32:28.870620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.784 [2024-11-18 13:32:28.870655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.784 [2024-11-18 13:32:28.870664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.784 [2024-11-18 13:32:28.870672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.784 [2024-11-18 13:32:28.870680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.784 [2024-11-18 13:32:28.870722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.784 [2024-11-18 13:32:28.870732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.784 [2024-11-18 13:32:28.870740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.784 [2024-11-18 13:32:28.870749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.784 [2024-11-18 13:32:28.870858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.218 ms, result 0 00:19:33.044 00:19:33.044 00:19:33.044 13:32:29 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:33.044 [2024-11-18 13:32:29.114816] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:19:33.044 [2024-11-18 13:32:29.114940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86883 ] 00:19:33.305 [2024-11-18 13:32:29.272371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.305 [2024-11-18 13:32:29.296030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.305 [2024-11-18 13:32:29.408510] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:33.305 [2024-11-18 13:32:29.408588] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:33.568 [2024-11-18 13:32:29.569997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.568 [2024-11-18 13:32:29.570063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:33.568 [2024-11-18 13:32:29.570079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:33.568 [2024-11-18 13:32:29.570087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.568 [2024-11-18 13:32:29.570144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.568 [2024-11-18 13:32:29.570155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.568 [2024-11-18 13:32:29.570185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:33.568 [2024-11-18 13:32:29.570194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.568 [2024-11-18 13:32:29.570220] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:33.568 [2024-11-18 13:32:29.570518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:33.568 [2024-11-18 13:32:29.570539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.568 [2024-11-18 13:32:29.570552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.568 [2024-11-18 13:32:29.570561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:33.568 [2024-11-18 13:32:29.570572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.568 [2024-11-18 13:32:29.572328] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:33.568 [2024-11-18 13:32:29.576187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.568 [2024-11-18 13:32:29.576235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:33.568 [2024-11-18 13:32:29.576247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.862 ms 00:19:33.568 [2024-11-18 13:32:29.576262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.568 [2024-11-18 13:32:29.576345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.568 [2024-11-18 13:32:29.576359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:33.568 [2024-11-18 13:32:29.576369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:33.568 [2024-11-18 13:32:29.576377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.569 [2024-11-18 13:32:29.584564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:33.569 [2024-11-18 13:32:29.584620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.569 [2024-11-18 13:32:29.584634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.147 ms 00:19:33.569 [2024-11-18 13:32:29.584642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.569 [2024-11-18 13:32:29.584740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.569 [2024-11-18 13:32:29.584750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.569 [2024-11-18 13:32:29.584759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:33.569 [2024-11-18 13:32:29.584770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.569 [2024-11-18 13:32:29.584833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.569 [2024-11-18 13:32:29.584849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:33.569 [2024-11-18 13:32:29.584859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:33.569 [2024-11-18 13:32:29.584866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.569 [2024-11-18 13:32:29.584891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.569 [2024-11-18 13:32:29.587001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.569 [2024-11-18 13:32:29.587044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.569 [2024-11-18 13:32:29.587055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:19:33.569 [2024-11-18 13:32:29.587062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.569 [2024-11-18 13:32:29.587096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.569 [2024-11-18 13:32:29.587126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:33.569 [2024-11-18 13:32:29.587140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:33.569 [2024-11-18 13:32:29.587148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.569 [2024-11-18 13:32:29.587197] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:33.569 [2024-11-18 13:32:29.587219] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:33.569 [2024-11-18 13:32:29.587256] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:33.569 [2024-11-18 13:32:29.587273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:33.569 [2024-11-18 13:32:29.587378] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:33.569 [2024-11-18 13:32:29.587389] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:33.569 [2024-11-18 13:32:29.587400] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:33.569 [2024-11-18 13:32:29.587415] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:33.569 [2024-11-18 13:32:29.587424] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:33.569 [2024-11-18 13:32:29.587437] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:33.569 [2024-11-18 13:32:29.587444] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:33.569 [2024-11-18 13:32:29.587452] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:33.569 [2024-11-18 13:32:29.587459] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:33.569 [2024-11-18 13:32:29.587468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.569 [2024-11-18 13:32:29.587475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:33.569 [2024-11-18 13:32:29.587484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:19:33.569 [2024-11-18 13:32:29.587494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.569 [2024-11-18 13:32:29.587580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.569 [2024-11-18 13:32:29.587593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:33.569 [2024-11-18 13:32:29.587600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:33.569 [2024-11-18 13:32:29.587612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.569 [2024-11-18 13:32:29.587713] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:33.569 [2024-11-18 13:32:29.587725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:33.569 [2024-11-18 13:32:29.587735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.569 [2024-11-18 13:32:29.587744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.569 [2024-11-18 13:32:29.587753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:33.569 [2024-11-18 13:32:29.587767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:33.569 [2024-11-18 13:32:29.587776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:33.569 [2024-11-18 13:32:29.587785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:33.569 [2024-11-18 13:32:29.587792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:33.569 [2024-11-18 13:32:29.587800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.569 [2024-11-18 13:32:29.587811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:33.569 [2024-11-18 13:32:29.587819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:33.569 [2024-11-18 13:32:29.587827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.569 [2024-11-18 13:32:29.587837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:33.569 [2024-11-18 13:32:29.587845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:33.569 [2024-11-18 13:32:29.587853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.569 [2024-11-18 13:32:29.587862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:33.569 [2024-11-18 13:32:29.587869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:33.569 [2024-11-18 13:32:29.587877] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.569 [2024-11-18 13:32:29.587885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:33.569 [2024-11-18 13:32:29.587893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:33.569 [2024-11-18 13:32:29.587901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.569 [2024-11-18 13:32:29.587910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:33.569 [2024-11-18 13:32:29.587918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:33.569 [2024-11-18 13:32:29.587925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.569 [2024-11-18 13:32:29.587933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:33.569 [2024-11-18 13:32:29.587947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:33.569 [2024-11-18 13:32:29.587956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.569 [2024-11-18 13:32:29.587963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:33.569 [2024-11-18 13:32:29.587971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:33.569 [2024-11-18 13:32:29.587978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.569 [2024-11-18 13:32:29.587986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:33.569 [2024-11-18 13:32:29.587994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:33.570 [2024-11-18 13:32:29.588003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.570 [2024-11-18 13:32:29.588010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:33.570 [2024-11-18 13:32:29.588018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:33.570 [2024-11-18 13:32:29.588025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.570 [2024-11-18 13:32:29.588033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:33.570 [2024-11-18 13:32:29.588040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:33.570 [2024-11-18 13:32:29.588047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.570 [2024-11-18 13:32:29.588055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:33.570 [2024-11-18 13:32:29.588062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:33.570 [2024-11-18 13:32:29.588072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.570 [2024-11-18 13:32:29.588079] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:33.570 [2024-11-18 13:32:29.588092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:33.570 [2024-11-18 13:32:29.588105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.570 [2024-11-18 13:32:29.588114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.570 [2024-11-18 13:32:29.588127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:33.570 [2024-11-18 13:32:29.588135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:33.570 [2024-11-18 13:32:29.588143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:33.570 
[2024-11-18 13:32:29.588150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:33.570 [2024-11-18 13:32:29.588156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:33.570 [2024-11-18 13:32:29.588178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:33.570 [2024-11-18 13:32:29.588188] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:33.570 [2024-11-18 13:32:29.588197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.570 [2024-11-18 13:32:29.588206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:33.570 [2024-11-18 13:32:29.588214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:33.570 [2024-11-18 13:32:29.588220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:33.570 [2024-11-18 13:32:29.588231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:33.570 [2024-11-18 13:32:29.588239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:33.570 [2024-11-18 13:32:29.588247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:33.570 [2024-11-18 13:32:29.588253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:33.570 [2024-11-18 13:32:29.588260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:33.570 [2024-11-18 13:32:29.588267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:33.570 [2024-11-18 13:32:29.588275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:33.570 [2024-11-18 13:32:29.588282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:33.570 [2024-11-18 13:32:29.588290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:33.570 [2024-11-18 13:32:29.588297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:33.570 [2024-11-18 13:32:29.588305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:33.570 [2024-11-18 13:32:29.588321] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:33.570 [2024-11-18 13:32:29.588333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.570 [2024-11-18 13:32:29.588342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:33.570 [2024-11-18 13:32:29.588349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:33.570 [2024-11-18 13:32:29.588356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:33.570 [2024-11-18 13:32:29.588365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:33.570 [2024-11-18 13:32:29.588373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.570 [2024-11-18 13:32:29.588382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:33.570 [2024-11-18 13:32:29.588391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:19:33.570 [2024-11-18 13:32:29.588399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.570 [2024-11-18 13:32:29.602825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.570 [2024-11-18 13:32:29.602874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.570 [2024-11-18 13:32:29.602889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.372 ms 00:19:33.570 [2024-11-18 13:32:29.602898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.570 [2024-11-18 13:32:29.602990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.570 [2024-11-18 13:32:29.603008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:33.570 [2024-11-18 13:32:29.603018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:33.570 [2024-11-18 13:32:29.603027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.570 [2024-11-18 13:32:29.622676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.570 [2024-11-18 13:32:29.622732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.570 [2024-11-18 13:32:29.622747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.590 ms 00:19:33.570 [2024-11-18 13:32:29.622755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.570 [2024-11-18 13:32:29.622803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.570 [2024-11-18 13:32:29.622813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.570 [2024-11-18 13:32:29.622828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:33.570 [2024-11-18 13:32:29.622840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.570 [2024-11-18 13:32:29.623437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.570 [2024-11-18 13:32:29.623466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.570 [2024-11-18 13:32:29.623478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:19:33.570 [2024-11-18 13:32:29.623487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.570 [2024-11-18 13:32:29.623649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.570 [2024-11-18 13:32:29.623668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.570 [2024-11-18 13:32:29.623679] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:19:33.570 [2024-11-18 13:32:29.623688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.570 [2024-11-18 13:32:29.631380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.570 [2024-11-18 13:32:29.631428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.570 [2024-11-18 13:32:29.631446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.670 ms 00:19:33.570 [2024-11-18 13:32:29.631454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.570 [2024-11-18 13:32:29.635051] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:33.570 [2024-11-18 13:32:29.635099] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:33.571 [2024-11-18 13:32:29.635123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.571 [2024-11-18 13:32:29.635131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:33.571 [2024-11-18 13:32:29.635140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:19:33.571 [2024-11-18 13:32:29.635147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.571 [2024-11-18 13:32:29.650627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.571 [2024-11-18 13:32:29.650690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:33.571 [2024-11-18 13:32:29.650702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.403 ms 00:19:33.571 [2024-11-18 13:32:29.650710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.571 [2024-11-18 13:32:29.653639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.571 [2024-11-18 13:32:29.653821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:33.571 [2024-11-18 13:32:29.653840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:19:33.571 [2024-11-18 13:32:29.653848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.571 [2024-11-18 13:32:29.656238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.571 [2024-11-18 13:32:29.656280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:33.571 [2024-11-18 13:32:29.656290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.352 ms 00:19:33.571 [2024-11-18 13:32:29.656298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.571 [2024-11-18 13:32:29.656658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.571 [2024-11-18 13:32:29.656671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:33.571 [2024-11-18 13:32:29.656680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:19:33.571 [2024-11-18 13:32:29.656692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.571 [2024-11-18 13:32:29.679090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.571 [2024-11-18 13:32:29.679185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:33.571 [2024-11-18 13:32:29.679199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.380 ms 00:19:33.571 [2024-11-18 13:32:29.679208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.571 [2024-11-18 13:32:29.687528] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:33.571 [2024-11-18 13:32:29.690771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.832 [2024-11-18 13:32:29.690956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:33.832 [2024-11-18 13:32:29.690976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.500 ms 00:19:33.832 [2024-11-18 13:32:29.690990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.832 [2024-11-18 13:32:29.691076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.833 [2024-11-18 13:32:29.691087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:33.833 [2024-11-18 13:32:29.691097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:33.833 [2024-11-18 13:32:29.691115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.833 [2024-11-18 13:32:29.691207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.833 [2024-11-18 13:32:29.691219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:33.833 [2024-11-18 13:32:29.691232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:33.833 [2024-11-18 13:32:29.691241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.833 [2024-11-18 13:32:29.691261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.833 [2024-11-18 13:32:29.691269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:33.833 [2024-11-18 13:32:29.691277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:33.833 [2024-11-18 13:32:29.691285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.833 [2024-11-18 13:32:29.691323] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:33.833 [2024-11-18 13:32:29.691333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.833 [2024-11-18 13:32:29.691346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:33.833 [2024-11-18 13:32:29.691355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:33.833 [2024-11-18 13:32:29.691365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.833 [2024-11-18 13:32:29.696797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.833 [2024-11-18 13:32:29.696846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:33.833 [2024-11-18 13:32:29.696858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.413 ms 00:19:33.833 [2024-11-18 13:32:29.696867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.833 [2024-11-18 13:32:29.696950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.833 [2024-11-18 13:32:29.696961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:33.833 [2024-11-18 13:32:29.696973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:33.833 [2024-11-18 13:32:29.696986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.833 
[2024-11-18 13:32:29.698100] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 127.648 ms, result 0 00:19:34.778  [2024-11-18T13:32:32.290Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-18T13:32:33.231Z] Copying: 22/1024 [MB] (11 MBps) [2024-11-18T13:32:34.176Z] Copying: 33/1024 [MB] (11 MBps) [2024-11-18T13:32:35.120Z] Copying: 44/1024 [MB] (10 MBps) [2024-11-18T13:32:36.065Z] Copying: 55/1024 [MB] (10 MBps) [2024-11-18T13:32:37.009Z] Copying: 68/1024 [MB] (13 MBps) [2024-11-18T13:32:37.955Z] Copying: 80/1024 [MB] (11 MBps) [2024-11-18T13:32:38.900Z] Copying: 92/1024 [MB] (11 MBps) [2024-11-18T13:32:40.365Z] Copying: 103/1024 [MB] (10 MBps) [2024-11-18T13:32:40.959Z] Copying: 115/1024 [MB] (12 MBps) [2024-11-18T13:32:41.903Z] Copying: 136/1024 [MB] (21 MBps) [2024-11-18T13:32:43.292Z] Copying: 155/1024 [MB] (18 MBps) [2024-11-18T13:32:44.239Z] Copying: 178/1024 [MB] (22 MBps) [2024-11-18T13:32:45.184Z] Copying: 196/1024 [MB] (18 MBps) [2024-11-18T13:32:46.130Z] Copying: 209/1024 [MB] (12 MBps) [2024-11-18T13:32:47.072Z] Copying: 226/1024 [MB] (17 MBps) [2024-11-18T13:32:48.017Z] Copying: 242/1024 [MB] (16 MBps) [2024-11-18T13:32:48.963Z] Copying: 254/1024 [MB] (12 MBps) [2024-11-18T13:32:49.907Z] Copying: 274/1024 [MB] (19 MBps) [2024-11-18T13:32:51.293Z] Copying: 296/1024 [MB] (21 MBps) [2024-11-18T13:32:52.233Z] Copying: 312/1024 [MB] (16 MBps) [2024-11-18T13:32:53.175Z] Copying: 338/1024 [MB] (26 MBps) [2024-11-18T13:32:54.118Z] Copying: 368/1024 [MB] (29 MBps) [2024-11-18T13:32:55.063Z] Copying: 382/1024 [MB] (13 MBps) [2024-11-18T13:32:56.006Z] Copying: 401/1024 [MB] (19 MBps) [2024-11-18T13:32:56.951Z] Copying: 420/1024 [MB] (18 MBps) [2024-11-18T13:32:57.894Z] Copying: 435/1024 [MB] (15 MBps) [2024-11-18T13:32:59.282Z] Copying: 453/1024 [MB] (17 MBps) [2024-11-18T13:33:00.227Z] Copying: 467/1024 [MB] (14 MBps) [2024-11-18T13:33:01.177Z] Copying: 487/1024 [MB] (20 MBps) [2024-11-18T13:33:02.121Z] Copying: 507/1024 [MB] (19 MBps) [2024-11-18T13:33:03.066Z] Copying: 527/1024 [MB] (20 MBps) [2024-11-18T13:33:04.013Z] Copying: 553/1024 [MB] (25 MBps) [2024-11-18T13:33:04.958Z] Copying: 570/1024 [MB] (17 MBps) [2024-11-18T13:33:05.904Z] Copying: 591/1024 [MB] (21 MBps) [2024-11-18T13:33:07.290Z] Copying: 602/1024 [MB] (10 MBps) [2024-11-18T13:33:08.235Z] Copying: 613/1024 [MB] (10 MBps) [2024-11-18T13:33:09.179Z] Copying: 624/1024 [MB] (11 MBps) [2024-11-18T13:33:10.124Z] Copying: 636/1024 [MB] (11 MBps) [2024-11-18T13:33:11.068Z] Copying: 648/1024 [MB] (11 MBps) [2024-11-18T13:33:12.016Z] Copying: 659/1024 [MB] (10 MBps) [2024-11-18T13:33:13.029Z] Copying: 669/1024 [MB] (10 MBps) [2024-11-18T13:33:13.973Z] Copying: 681/1024 [MB] (11 MBps) [2024-11-18T13:33:14.919Z] Copying: 692/1024 [MB] (11 MBps) [2024-11-18T13:33:16.307Z] Copying: 703/1024 [MB] (11 MBps) [2024-11-18T13:33:17.250Z] Copying: 714/1024 [MB] (10 MBps) [2024-11-18T13:33:18.192Z] Copying: 726/1024 [MB] (11 MBps) [2024-11-18T13:33:19.135Z] Copying: 737/1024 [MB] (10 MBps) [2024-11-18T13:33:20.078Z] Copying: 747/1024 [MB] (10 MBps) [2024-11-18T13:33:21.023Z] Copying: 759/1024 [MB] (11 MBps) [2024-11-18T13:33:21.966Z] Copying: 770/1024 [MB] (11 MBps) [2024-11-18T13:33:22.909Z] Copying: 782/1024 [MB] (11 MBps) [2024-11-18T13:33:24.298Z] Copying: 793/1024 [MB] (10 MBps) [2024-11-18T13:33:25.245Z] Copying: 803/1024 [MB] (10 MBps) [2024-11-18T13:33:26.191Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-18T13:33:27.134Z] Copying: 825/1024 [MB] (10 MBps) 
[2024-11-18T13:33:28.079Z] Copying: 836/1024 [MB] (11 MBps) [2024-11-18T13:33:29.023Z] Copying: 847/1024 [MB] (10 MBps) [2024-11-18T13:33:29.966Z] Copying: 870/1024 [MB] (23 MBps) [2024-11-18T13:33:30.910Z] Copying: 888/1024 [MB] (17 MBps) [2024-11-18T13:33:32.297Z] Copying: 910/1024 [MB] (22 MBps) [2024-11-18T13:33:32.900Z] Copying: 928/1024 [MB] (18 MBps) [2024-11-18T13:33:34.287Z] Copying: 944/1024 [MB] (16 MBps) [2024-11-18T13:33:35.234Z] Copying: 962/1024 [MB] (17 MBps) [2024-11-18T13:33:36.180Z] Copying: 974/1024 [MB] (12 MBps) [2024-11-18T13:33:37.123Z] Copying: 991/1024 [MB] (16 MBps) [2024-11-18T13:33:38.070Z] Copying: 1010/1024 [MB] (18 MBps) [2024-11-18T13:33:38.070Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 13:33:37.852557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-11-18 13:33:37.852632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:41.942 [2024-11-18 13:33:37.852649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:41.942 [2024-11-18 13:33:37.852668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-11-18 13:33:37.852692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:41.942 [2024-11-18 13:33:37.853508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-11-18 13:33:37.853548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:41.942 [2024-11-18 13:33:37.853560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:20:41.942 [2024-11-18 13:33:37.853570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-11-18 13:33:37.853810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-11-18 13:33:37.853822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:41.942 [2024-11-18 13:33:37.853832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:20:41.942 [2024-11-18 13:33:37.853848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-11-18 13:33:37.857317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-11-18 13:33:37.857343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:41.942 [2024-11-18 13:33:37.857354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.452 ms 00:20:41.942 [2024-11-18 13:33:37.857362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-11-18 13:33:37.863534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-11-18 13:33:37.863572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:41.942 [2024-11-18 13:33:37.863583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.155 ms 00:20:41.942 [2024-11-18 13:33:37.863591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-11-18 13:33:37.866578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-11-18 13:33:37.866644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:41.942 [2024-11-18 13:33:37.866654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms 00:20:41.942 [2024-11-18 13:33:37.866662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:41.942 [2024-11-18 13:33:37.871364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-11-18 13:33:37.871417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:41.942 [2024-11-18 13:33:37.871428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:20:41.942 [2024-11-18 13:33:37.871437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-11-18 13:33:37.871563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-11-18 13:33:37.871574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:41.943 [2024-11-18 13:33:37.871584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:41.943 [2024-11-18 13:33:37.871592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.943 [2024-11-18 13:33:37.874792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.943 [2024-11-18 13:33:37.874844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:41.943 [2024-11-18 13:33:37.874855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:20:41.943 [2024-11-18 13:33:37.874862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.943 [2024-11-18 13:33:37.877804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.943 [2024-11-18 13:33:37.877853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:41.943 [2024-11-18 13:33:37.877863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.897 ms 00:20:41.943 [2024-11-18 13:33:37.877870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.943 [2024-11-18 13:33:37.880516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.943 [2024-11-18 13:33:37.880571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:41.943 [2024-11-18 13:33:37.880582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.601 ms 00:20:41.943 [2024-11-18 13:33:37.880591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.943 [2024-11-18 13:33:37.882994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.943 [2024-11-18 13:33:37.883044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:41.943 [2024-11-18 13:33:37.883054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:20:41.943 [2024-11-18 13:33:37.883061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.943 [2024-11-18 13:33:37.883117] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:41.943 [2024-11-18 13:33:37.883133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 
00:20:41.943 [2024-11-18 13:33:37.883192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 
wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:41.943 [2024-11-18 13:33:37.883703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883748] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:41.944 [2024-11-18 13:33:37.883918] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:41.944 [2024-11-18 13:33:37.883926] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3cc8cf29-3966-4e6d-a4c5-0d674196dd1d 00:20:41.944 [2024-11-18 13:33:37.883934] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:41.944 [2024-11-18 13:33:37.883942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:41.944 [2024-11-18 13:33:37.883950] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:41.944 
[2024-11-18 13:33:37.883958] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:41.944 [2024-11-18 13:33:37.883966] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:41.944 [2024-11-18 13:33:37.883973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:41.944 [2024-11-18 13:33:37.883981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:41.944 [2024-11-18 13:33:37.883987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:41.944 [2024-11-18 13:33:37.883993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:41.944 [2024-11-18 13:33:37.884012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.944 [2024-11-18 13:33:37.884027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:41.944 [2024-11-18 13:33:37.884036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:20:41.944 [2024-11-18 13:33:37.884045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.886461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.944 [2024-11-18 13:33:37.886500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:41.944 [2024-11-18 13:33:37.886511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:20:41.944 [2024-11-18 13:33:37.886520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.886672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.944 [2024-11-18 13:33:37.886683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:41.944 [2024-11-18 13:33:37.886693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:41.944 [2024-11-18 13:33:37.886702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.894406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.894457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:41.944 [2024-11-18 13:33:37.894468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.894476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.894543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.894552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:41.944 [2024-11-18 13:33:37.894560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.894568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.894632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.894644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:41.944 [2024-11-18 13:33:37.894652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.894660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.894676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.894694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:20:41.944 [2024-11-18 13:33:37.894702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.894710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.908877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.908931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:41.944 [2024-11-18 13:33:37.908943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.908952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.919842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.919900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.944 [2024-11-18 13:33:37.919912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.919920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.919971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.919980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.944 [2024-11-18 13:33:37.919988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.919996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.920034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.920043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.944 [2024-11-18 13:33:37.920056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.920070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.920143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.920153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.944 [2024-11-18 13:33:37.920161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.920206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.920237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.920247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:41.944 [2024-11-18 13:33:37.920256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.920267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.920313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 [2024-11-18 13:33:37.920322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.944 [2024-11-18 13:33:37.920335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.920342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.920386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.944 
[2024-11-18 13:33:37.920396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.944 [2024-11-18 13:33:37.920408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.944 [2024-11-18 13:33:37.920416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.944 [2024-11-18 13:33:37.920553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.958 ms, result 0 00:20:42.206 00:20:42.206 00:20:42.206 13:33:38 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:44.117 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:44.117 13:33:40 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:44.377 [2024-11-18 13:33:40.280787] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:20:44.378 [2024-11-18 13:33:40.280884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87625 ] 00:20:44.378 [2024-11-18 13:33:40.434088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.378 [2024-11-18 13:33:40.455892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.640 [2024-11-18 13:33:40.554682] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:44.640 [2024-11-18 13:33:40.554765] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:44.640 [2024-11-18 13:33:40.716263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.716332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:44.640 [2024-11-18 13:33:40.716352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:44.640 [2024-11-18 13:33:40.716361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.716418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.716429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.640 [2024-11-18 13:33:40.716438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:44.640 [2024-11-18 13:33:40.716447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.716471] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:44.640 [2024-11-18 13:33:40.716950] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:44.640 [2024-11-18 13:33:40.716993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.717006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.640 [2024-11-18 13:33:40.717019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:20:44.640 [2024-11-18 13:33:40.717030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.718898] 
mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:44.640 [2024-11-18 13:33:40.723199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.723252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:44.640 [2024-11-18 13:33:40.723264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.303 ms 00:20:44.640 [2024-11-18 13:33:40.723280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.723358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.723372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:44.640 [2024-11-18 13:33:40.723382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:44.640 [2024-11-18 13:33:40.723390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.731970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.732020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.640 [2024-11-18 13:33:40.732039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.537 ms 00:20:44.640 [2024-11-18 13:33:40.732054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.732162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.732193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.640 [2024-11-18 13:33:40.732202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:44.640 [2024-11-18 13:33:40.732213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.732274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.732285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:44.640 [2024-11-18 13:33:40.732294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:44.640 [2024-11-18 13:33:40.732303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.732331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:44.640 [2024-11-18 13:33:40.734471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.734516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.640 [2024-11-18 13:33:40.734526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.146 ms 00:20:44.640 [2024-11-18 13:33:40.734534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.734568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.734582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:44.640 [2024-11-18 13:33:40.734591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:44.640 [2024-11-18 13:33:40.734600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.734629] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:44.640 [2024-11-18 13:33:40.734650] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:44.640 [2024-11-18 13:33:40.734687] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:44.640 [2024-11-18 13:33:40.734708] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:44.640 [2024-11-18 13:33:40.734816] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:44.640 [2024-11-18 13:33:40.734828] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:44.640 [2024-11-18 13:33:40.734839] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:44.640 [2024-11-18 13:33:40.734853] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:44.640 [2024-11-18 13:33:40.734863] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:44.640 [2024-11-18 13:33:40.734871] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:44.640 [2024-11-18 13:33:40.734879] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:44.640 [2024-11-18 13:33:40.734887] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:44.640 [2024-11-18 13:33:40.734895] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:44.640 [2024-11-18 13:33:40.734903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.734914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:44.640 [2024-11-18 13:33:40.734923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:44.640 [2024-11-18 13:33:40.734937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.735019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.640 [2024-11-18 13:33:40.735030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:44.640 [2024-11-18 13:33:40.735038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:44.640 [2024-11-18 13:33:40.735046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.640 [2024-11-18 13:33:40.735160] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:44.640 [2024-11-18 13:33:40.735189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:44.640 [2024-11-18 13:33:40.735199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:44.640 [2024-11-18 13:33:40.735208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.640 [2024-11-18 13:33:40.735217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:44.640 [2024-11-18 13:33:40.735232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:44.640 [2024-11-18 13:33:40.735241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:44.640 [2024-11-18 13:33:40.735249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:44.640 [2024-11-18 13:33:40.735257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:20:44.640 [2024-11-18 13:33:40.735264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:44.640 [2024-11-18 13:33:40.735276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:44.640 [2024-11-18 13:33:40.735283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:44.640 [2024-11-18 13:33:40.735291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:44.640 [2024-11-18 13:33:40.735299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:44.640 [2024-11-18 13:33:40.735306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:44.640 [2024-11-18 13:33:40.735314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.640 [2024-11-18 13:33:40.735322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:44.641 [2024-11-18 13:33:40.735330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:44.641 [2024-11-18 13:33:40.735337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.641 [2024-11-18 13:33:40.735347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:44.641 [2024-11-18 13:33:40.735356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:44.641 [2024-11-18 13:33:40.735364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.641 [2024-11-18 13:33:40.735372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:44.641 [2024-11-18 13:33:40.735379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:44.641 [2024-11-18 13:33:40.735386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.641 [2024-11-18 13:33:40.735395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:44.641 [2024-11-18 13:33:40.735410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:44.641 [2024-11-18 13:33:40.735418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.641 [2024-11-18 13:33:40.735425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:44.641 [2024-11-18 13:33:40.735434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:44.641 [2024-11-18 13:33:40.735441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.641 [2024-11-18 13:33:40.735452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:44.641 [2024-11-18 13:33:40.735459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:44.641 [2024-11-18 13:33:40.735466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:44.641 [2024-11-18 13:33:40.735474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:44.641 [2024-11-18 13:33:40.735482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:44.641 [2024-11-18 13:33:40.735489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:44.641 [2024-11-18 13:33:40.735497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:44.641 [2024-11-18 13:33:40.735504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:44.641 [2024-11-18 13:33:40.735511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.641 [2024-11-18 13:33:40.735519] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:44.641 [2024-11-18 13:33:40.735526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:44.641 [2024-11-18 13:33:40.735537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.641 [2024-11-18 13:33:40.735544] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:44.641 [2024-11-18 13:33:40.735551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:44.641 [2024-11-18 13:33:40.735561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:44.641 [2024-11-18 13:33:40.735569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.641 [2024-11-18 13:33:40.735577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:44.641 [2024-11-18 13:33:40.735583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:44.641 [2024-11-18 13:33:40.735590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:44.641 [2024-11-18 13:33:40.735597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:44.641 [2024-11-18 13:33:40.735606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:44.641 [2024-11-18 13:33:40.735613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:44.641 [2024-11-18 13:33:40.735622] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:44.641 [2024-11-18 13:33:40.735632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:44.641 [2024-11-18 13:33:40.735640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:44.641 [2024-11-18 13:33:40.735648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:44.641 [2024-11-18 13:33:40.735655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:44.641 [2024-11-18 13:33:40.735665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:44.641 [2024-11-18 13:33:40.735672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:44.641 [2024-11-18 13:33:40.735679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:44.641 [2024-11-18 13:33:40.735686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:44.641 [2024-11-18 13:33:40.735693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:44.641 [2024-11-18 13:33:40.735700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:44.641 [2024-11-18 13:33:40.735708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:44.641 [2024-11-18 13:33:40.735715] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:44.641 [2024-11-18 13:33:40.735722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:44.641 [2024-11-18 13:33:40.735729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:44.641 [2024-11-18 13:33:40.735736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:44.641 [2024-11-18 13:33:40.735744] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:44.641 [2024-11-18 13:33:40.735752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:44.641 [2024-11-18 13:33:40.735760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:44.641 [2024-11-18 13:33:40.735767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:44.641 [2024-11-18 13:33:40.735774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:44.641 [2024-11-18 13:33:40.735784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:44.641 [2024-11-18 13:33:40.735792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.641 [2024-11-18 13:33:40.735800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:44.641 [2024-11-18 13:33:40.735808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:20:44.641 [2024-11-18 13:33:40.735819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.641 [2024-11-18 13:33:40.750094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.641 [2024-11-18 13:33:40.750148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.641 [2024-11-18 13:33:40.750162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.224 ms 00:20:44.641 [2024-11-18 13:33:40.750197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.641 [2024-11-18 13:33:40.750286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.641 [2024-11-18 13:33:40.750300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:44.641 [2024-11-18 13:33:40.750311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:44.641 [2024-11-18 13:33:40.750320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.902 [2024-11-18 13:33:40.770548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.902 [2024-11-18 13:33:40.770607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.902 [2024-11-18 13:33:40.770620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.168 ms 00:20:44.902 [2024-11-18 13:33:40.770629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.902 [2024-11-18 13:33:40.770677] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.902 [2024-11-18 13:33:40.770688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.902 [2024-11-18 13:33:40.770702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:44.902 [2024-11-18 13:33:40.770713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.902 [2024-11-18 13:33:40.771314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.902 [2024-11-18 13:33:40.771349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.902 [2024-11-18 13:33:40.771362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:20:44.902 [2024-11-18 13:33:40.771372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.902 [2024-11-18 13:33:40.771531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.902 [2024-11-18 13:33:40.771543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.902 [2024-11-18 13:33:40.771553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:20:44.902 [2024-11-18 13:33:40.771563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.902 [2024-11-18 13:33:40.779498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.902 [2024-11-18 13:33:40.779549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.902 [2024-11-18 13:33:40.779566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.910 ms 00:20:44.902 [2024-11-18 13:33:40.779579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.902 [2024-11-18 13:33:40.782975] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:44.902 [2024-11-18 13:33:40.783027] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:44.902 [2024-11-18 13:33:40.783045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.902 [2024-11-18 13:33:40.783056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:44.902 [2024-11-18 13:33:40.783065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:20:44.902 [2024-11-18 13:33:40.783073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.902 [2024-11-18 13:33:40.799214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.902 [2024-11-18 13:33:40.799275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:44.903 [2024-11-18 13:33:40.799288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.069 ms 00:20:44.903 [2024-11-18 13:33:40.799296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.802061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.802115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:44.903 [2024-11-18 13:33:40.802126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:20:44.903 [2024-11-18 13:33:40.802133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.804728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 
13:33:40.804777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:44.903 [2024-11-18 13:33:40.804788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.527 ms 00:20:44.903 [2024-11-18 13:33:40.804795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.805190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.805213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:44.903 [2024-11-18 13:33:40.805223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:20:44.903 [2024-11-18 13:33:40.805231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.829927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.829996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:44.903 [2024-11-18 13:33:40.830011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.675 ms 00:20:44.903 [2024-11-18 13:33:40.830020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.838480] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:44.903 [2024-11-18 13:33:40.841923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.841977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:44.903 [2024-11-18 13:33:40.841990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.848 ms 00:20:44.903 [2024-11-18 13:33:40.842008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.842099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.842111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:44.903 [2024-11-18 13:33:40.842121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:44.903 [2024-11-18 13:33:40.842131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.842241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.842254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:44.903 [2024-11-18 13:33:40.842266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:44.903 [2024-11-18 13:33:40.842274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.842299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.842308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:44.903 [2024-11-18 13:33:40.842317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:44.903 [2024-11-18 13:33:40.842325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.842361] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:44.903 [2024-11-18 13:33:40.842371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.842380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:44.903 [2024-11-18 
13:33:40.842388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:44.903 [2024-11-18 13:33:40.842402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.848255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.848312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:44.903 [2024-11-18 13:33:40.848323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.829 ms 00:20:44.903 [2024-11-18 13:33:40.848332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.848418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.903 [2024-11-18 13:33:40.848428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:44.903 [2024-11-18 13:33:40.848441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:44.903 [2024-11-18 13:33:40.848449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.903 [2024-11-18 13:33:40.849645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.863 ms, result 0 00:20:45.844  [2024-11-18T13:33:42.913Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-18T13:33:43.897Z] Copying: 38/1024 [MB] (17 MBps) [2024-11-18T13:33:44.874Z] Copying: 66/1024 [MB] (27 MBps) [2024-11-18T13:33:46.259Z] Copying: 85/1024 [MB] (19 MBps) [2024-11-18T13:33:47.200Z] Copying: 106/1024 [MB] (21 MBps) [2024-11-18T13:33:48.142Z] Copying: 127/1024 [MB] (21 MBps) [2024-11-18T13:33:49.086Z] Copying: 148/1024 [MB] (21 MBps) [2024-11-18T13:33:50.030Z] Copying: 167/1024 [MB] (18 MBps) [2024-11-18T13:33:50.974Z] Copying: 181/1024 [MB] (13 MBps) [2024-11-18T13:33:51.918Z] Copying: 199/1024 [MB] (18 MBps) [2024-11-18T13:33:52.865Z] Copying: 216/1024 [MB] (16 MBps) [2024-11-18T13:33:54.253Z] Copying: 232/1024 [MB] (15 MBps) [2024-11-18T13:33:55.197Z] Copying: 249/1024 [MB] (17 MBps) [2024-11-18T13:33:56.142Z] Copying: 267/1024 [MB] (17 MBps) [2024-11-18T13:33:57.087Z] Copying: 282/1024 [MB] (15 MBps) [2024-11-18T13:33:58.033Z] Copying: 301/1024 [MB] (18 MBps) [2024-11-18T13:33:58.978Z] Copying: 311/1024 [MB] (10 MBps) [2024-11-18T13:33:59.923Z] Copying: 322/1024 [MB] (10 MBps) [2024-11-18T13:34:00.868Z] Copying: 332/1024 [MB] (10 MBps) [2024-11-18T13:34:02.259Z] Copying: 343/1024 [MB] (10 MBps) [2024-11-18T13:34:03.205Z] Copying: 353/1024 [MB] (10 MBps) [2024-11-18T13:34:04.151Z] Copying: 364/1024 [MB] (10 MBps) [2024-11-18T13:34:05.097Z] Copying: 374/1024 [MB] (10 MBps) [2024-11-18T13:34:06.041Z] Copying: 387/1024 [MB] (12 MBps) [2024-11-18T13:34:06.987Z] Copying: 411/1024 [MB] (24 MBps) [2024-11-18T13:34:07.933Z] Copying: 429/1024 [MB] (17 MBps) [2024-11-18T13:34:08.880Z] Copying: 445/1024 [MB] (15 MBps) [2024-11-18T13:34:10.266Z] Copying: 460/1024 [MB] (15 MBps) [2024-11-18T13:34:11.209Z] Copying: 474/1024 [MB] (14 MBps) [2024-11-18T13:34:12.155Z] Copying: 491/1024 [MB] (17 MBps) [2024-11-18T13:34:13.102Z] Copying: 507/1024 [MB] (15 MBps) [2024-11-18T13:34:14.046Z] Copying: 523/1024 [MB] (15 MBps) [2024-11-18T13:34:14.991Z] Copying: 544/1024 [MB] (21 MBps) [2024-11-18T13:34:16.012Z] Copying: 554/1024 [MB] (10 MBps) [2024-11-18T13:34:16.956Z] Copying: 567/1024 [MB] (13 MBps) [2024-11-18T13:34:17.901Z] Copying: 579/1024 [MB] (11 MBps) [2024-11-18T13:34:19.291Z] Copying: 595/1024 [MB] (16 MBps) [2024-11-18T13:34:19.864Z] Copying: 608/1024 
[MB] (13 MBps) [2024-11-18T13:34:21.251Z] Copying: 623/1024 [MB] (15 MBps) [2024-11-18T13:34:22.194Z] Copying: 638/1024 [MB] (14 MBps) [2024-11-18T13:34:23.139Z] Copying: 657/1024 [MB] (18 MBps) [2024-11-18T13:34:24.083Z] Copying: 673/1024 [MB] (16 MBps) [2024-11-18T13:34:25.027Z] Copying: 684/1024 [MB] (10 MBps) [2024-11-18T13:34:25.972Z] Copying: 710904/1048576 [kB] (10168 kBps) [2024-11-18T13:34:26.917Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-18T13:34:27.863Z] Copying: 714/1024 [MB] (10 MBps) [2024-11-18T13:34:29.241Z] Copying: 724/1024 [MB] (10 MBps) [2024-11-18T13:34:30.185Z] Copying: 756/1024 [MB] (31 MBps) [2024-11-18T13:34:31.131Z] Copying: 769/1024 [MB] (13 MBps) [2024-11-18T13:34:32.076Z] Copying: 779/1024 [MB] (10 MBps) [2024-11-18T13:34:33.021Z] Copying: 789/1024 [MB] (10 MBps) [2024-11-18T13:34:33.965Z] Copying: 801/1024 [MB] (11 MBps) [2024-11-18T13:34:34.908Z] Copying: 814/1024 [MB] (12 MBps) [2024-11-18T13:34:36.294Z] Copying: 830/1024 [MB] (16 MBps) [2024-11-18T13:34:36.868Z] Copying: 857/1024 [MB] (26 MBps) [2024-11-18T13:34:38.250Z] Copying: 884/1024 [MB] (26 MBps) [2024-11-18T13:34:39.194Z] Copying: 896/1024 [MB] (12 MBps) [2024-11-18T13:34:40.137Z] Copying: 913/1024 [MB] (16 MBps) [2024-11-18T13:34:41.081Z] Copying: 927/1024 [MB] (14 MBps) [2024-11-18T13:34:42.023Z] Copying: 942/1024 [MB] (15 MBps) [2024-11-18T13:34:42.963Z] Copying: 955/1024 [MB] (12 MBps) [2024-11-18T13:34:43.904Z] Copying: 967/1024 [MB] (12 MBps) [2024-11-18T13:34:45.291Z] Copying: 986/1024 [MB] (18 MBps) [2024-11-18T13:34:45.863Z] Copying: 1003/1024 [MB] (17 MBps) [2024-11-18T13:34:47.252Z] Copying: 1018/1024 [MB] (15 MBps) [2024-11-18T13:34:47.252Z] Copying: 1048420/1048576 [kB] (5216 kBps) [2024-11-18T13:34:47.252Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 13:34:47.020979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.124 [2024-11-18 13:34:47.021056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:51.124 [2024-11-18 13:34:47.021074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:51.124 [2024-11-18 13:34:47.021085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.124 [2024-11-18 13:34:47.021276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:51.124 [2024-11-18 13:34:47.025298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.124 [2024-11-18 13:34:47.025366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:51.124 [2024-11-18 13:34:47.025379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:21:51.124 [2024-11-18 13:34:47.025396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.124 [2024-11-18 13:34:47.034955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.124 [2024-11-18 13:34:47.035019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:51.124 [2024-11-18 13:34:47.035031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.619 ms 00:21:51.124 [2024-11-18 13:34:47.035040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.124 [2024-11-18 13:34:47.058969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.124 [2024-11-18 13:34:47.059021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:51.124 [2024-11-18 13:34:47.059033] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.911 ms 00:21:51.124 [2024-11-18 13:34:47.059042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.124 [2024-11-18 13:34:47.065237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.124 [2024-11-18 13:34:47.065277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:51.124 [2024-11-18 13:34:47.065290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:21:51.124 [2024-11-18 13:34:47.065298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.124 [2024-11-18 13:34:47.068020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.124 [2024-11-18 13:34:47.068074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:51.124 [2024-11-18 13:34:47.068085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.659 ms 00:21:51.124 [2024-11-18 13:34:47.068093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.124 [2024-11-18 13:34:47.072667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.124 [2024-11-18 13:34:47.072714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:51.124 [2024-11-18 13:34:47.072726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.533 ms 00:21:51.124 [2024-11-18 13:34:47.072734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.469 [2024-11-18 13:34:47.284738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.469 [2024-11-18 13:34:47.284805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:51.469 [2024-11-18 13:34:47.284819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 211.949 ms 00:21:51.469 [2024-11-18 13:34:47.284837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.469 [2024-11-18 13:34:47.287454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.469 [2024-11-18 13:34:47.287502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:51.469 [2024-11-18 13:34:47.287515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:21:51.469 [2024-11-18 13:34:47.287522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.469 [2024-11-18 13:34:47.289390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.469 [2024-11-18 13:34:47.289433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:51.469 [2024-11-18 13:34:47.289443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.825 ms 00:21:51.469 [2024-11-18 13:34:47.289450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.469 [2024-11-18 13:34:47.291053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.469 [2024-11-18 13:34:47.291098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:51.469 [2024-11-18 13:34:47.291119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:21:51.469 [2024-11-18 13:34:47.291127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.469 [2024-11-18 13:34:47.292617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.469 [2024-11-18 13:34:47.292661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
clean state 00:21:51.469 [2024-11-18 13:34:47.292671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:21:51.469 [2024-11-18 13:34:47.292678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.469 [2024-11-18 13:34:47.292714] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:51.469 [2024-11-18 13:34:47.292729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100608 / 261120 wr_cnt: 1 state: open 00:21:51.469 [2024-11-18 13:34:47.292740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:51.469 [2024-11-18 13:34:47.292833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292909] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.292992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 
13:34:47.293095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:21:51.470 [2024-11-18 13:34:47.293312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:51.470 [2024-11-18 13:34:47.293551] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:51.470 [2024-11-18 13:34:47.293560] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3cc8cf29-3966-4e6d-a4c5-0d674196dd1d 00:21:51.470 [2024-11-18 13:34:47.293583] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100608 00:21:51.471 [2024-11-18 13:34:47.293592] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101568 00:21:51.471 [2024-11-18 13:34:47.293613] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100608 00:21:51.471 [2024-11-18 13:34:47.293625] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:21:51.471 [2024-11-18 13:34:47.293633] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:51.471 [2024-11-18 13:34:47.293642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:51.471 [2024-11-18 13:34:47.293650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:51.471 [2024-11-18 13:34:47.293657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:51.471 [2024-11-18 13:34:47.293665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:51.471 [2024-11-18 13:34:47.293678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.471 [2024-11-18 13:34:47.293687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:51.471 [2024-11-18 13:34:47.293696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:21:51.471 [2024-11-18 13:34:47.293703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.295966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.471 [2024-11-18 13:34:47.296011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:51.471 [2024-11-18 13:34:47.296021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:21:51.471 [2024-11-18 13:34:47.296033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.296158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.471 [2024-11-18 13:34:47.296190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:51.471 [2024-11-18 13:34:47.296203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:51.471 [2024-11-18 13:34:47.296212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.303603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.303650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:51.471 [2024-11-18 13:34:47.303662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.303669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 
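(Annotation, not part of the captured run: the statistics block above reports "WAF: 1.0095" next to "total writes: 101568" and "user writes: 100608". Assuming the WAF value printed here is simply total writes divided by user writes, which is consistent with those counters, it can be sanity-checked from a shell:

    total=101568; user=100608
    echo "scale=4; $total / $user" | bc    # prints 1.0095

This is only an illustrative check against the numbers logged above; the earlier dump in this log shows "WAF: inf", which these counters alone do not explain.)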
[2024-11-18 13:34:47.303728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.303736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:51.471 [2024-11-18 13:34:47.303745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.303760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.303807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.303817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:51.471 [2024-11-18 13:34:47.303825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.303833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.303848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.303857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:51.471 [2024-11-18 13:34:47.303865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.303872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.317604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.317658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:51.471 [2024-11-18 13:34:47.317671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.317680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.328739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.328798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:51.471 [2024-11-18 13:34:47.328811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.328820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.328877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.328894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:51.471 [2024-11-18 13:34:47.328902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.328911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.328972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.328983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:51.471 [2024-11-18 13:34:47.328992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.329001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.329083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.329097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:51.471 [2024-11-18 13:34:47.329108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.329116] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.329147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.329158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:51.471 [2024-11-18 13:34:47.329185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.329193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.329239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.329249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:51.471 [2024-11-18 13:34:47.329260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.329269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.329314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.471 [2024-11-18 13:34:47.329325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:51.471 [2024-11-18 13:34:47.329334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.471 [2024-11-18 13:34:47.329346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.471 [2024-11-18 13:34:47.329492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 310.955 ms, result 0 00:21:52.061 00:21:52.061 00:21:52.061 13:34:48 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:52.061 [2024-11-18 13:34:48.150258] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:21:52.061 [2024-11-18 13:34:48.150411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88325 ] 00:21:52.323 [2024-11-18 13:34:48.312511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.323 [2024-11-18 13:34:48.341303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.586 [2024-11-18 13:34:48.456484] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:52.586 [2024-11-18 13:34:48.456569] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:52.586 [2024-11-18 13:34:48.617720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.586 [2024-11-18 13:34:48.617784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:52.586 [2024-11-18 13:34:48.617804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:52.586 [2024-11-18 13:34:48.617813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.586 [2024-11-18 13:34:48.617868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.586 [2024-11-18 13:34:48.617880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:52.586 [2024-11-18 13:34:48.617893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:52.586 [2024-11-18 13:34:48.617901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.586 [2024-11-18 13:34:48.617927] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:52.586 [2024-11-18 13:34:48.618336] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:52.586 [2024-11-18 13:34:48.618415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.586 [2024-11-18 13:34:48.618423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:52.586 [2024-11-18 13:34:48.618432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:21:52.586 [2024-11-18 13:34:48.618443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.586 [2024-11-18 13:34:48.620108] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:52.586 [2024-11-18 13:34:48.623750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.586 [2024-11-18 13:34:48.623797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:52.586 [2024-11-18 13:34:48.623814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:21:52.586 [2024-11-18 13:34:48.623826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.586 [2024-11-18 13:34:48.623898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.586 [2024-11-18 13:34:48.623908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:52.586 [2024-11-18 13:34:48.623917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:52.586 [2024-11-18 13:34:48.623925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.586 [2024-11-18 13:34:48.631795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:52.586 [2024-11-18 13:34:48.631842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:52.586 [2024-11-18 13:34:48.631860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.828 ms 00:21:52.586 [2024-11-18 13:34:48.631868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.586 [2024-11-18 13:34:48.631968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.586 [2024-11-18 13:34:48.631987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:52.586 [2024-11-18 13:34:48.631996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:52.586 [2024-11-18 13:34:48.632004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.586 [2024-11-18 13:34:48.632057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.586 [2024-11-18 13:34:48.632068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:52.586 [2024-11-18 13:34:48.632080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:52.586 [2024-11-18 13:34:48.632092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.586 [2024-11-18 13:34:48.632121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:52.587 [2024-11-18 13:34:48.634122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.587 [2024-11-18 13:34:48.634160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:52.587 [2024-11-18 13:34:48.634185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:21:52.587 [2024-11-18 13:34:48.634194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.587 [2024-11-18 13:34:48.634228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.587 [2024-11-18 13:34:48.634248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:52.587 [2024-11-18 13:34:48.634256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:52.587 [2024-11-18 13:34:48.634264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.587 [2024-11-18 13:34:48.634290] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:52.587 [2024-11-18 13:34:48.634309] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:52.587 [2024-11-18 13:34:48.634346] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:52.587 [2024-11-18 13:34:48.634366] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:52.587 [2024-11-18 13:34:48.634475] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:52.587 [2024-11-18 13:34:48.634486] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:52.587 [2024-11-18 13:34:48.634497] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:52.587 [2024-11-18 13:34:48.634512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:52.587 [2024-11-18 13:34:48.634521] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:52.587 [2024-11-18 13:34:48.634530] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:52.587 [2024-11-18 13:34:48.634541] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:52.587 [2024-11-18 13:34:48.634549] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:52.587 [2024-11-18 13:34:48.634557] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:52.587 [2024-11-18 13:34:48.634568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.587 [2024-11-18 13:34:48.634576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:52.587 [2024-11-18 13:34:48.634584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:21:52.587 [2024-11-18 13:34:48.634591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.587 [2024-11-18 13:34:48.634679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.587 [2024-11-18 13:34:48.634701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:52.587 [2024-11-18 13:34:48.634709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:52.587 [2024-11-18 13:34:48.634716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.587 [2024-11-18 13:34:48.634817] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:52.587 [2024-11-18 13:34:48.634837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:52.587 [2024-11-18 13:34:48.634847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:52.587 [2024-11-18 13:34:48.634861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.587 [2024-11-18 13:34:48.634870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:52.587 [2024-11-18 13:34:48.634886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:52.587 [2024-11-18 13:34:48.634895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:52.587 [2024-11-18 13:34:48.634909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:52.587 [2024-11-18 13:34:48.634918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:52.587 [2024-11-18 13:34:48.634926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.587 [2024-11-18 13:34:48.634933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:52.587 [2024-11-18 13:34:48.634941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:52.587 [2024-11-18 13:34:48.634948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.587 [2024-11-18 13:34:48.634956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:52.587 [2024-11-18 13:34:48.634963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:52.587 [2024-11-18 13:34:48.634971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.587 [2024-11-18 13:34:48.634980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:52.587 [2024-11-18 13:34:48.634988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:52.587 [2024-11-18 13:34:48.634996] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.587 [2024-11-18 13:34:48.635004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:52.587 [2024-11-18 13:34:48.635012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:52.587 [2024-11-18 13:34:48.635020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.587 [2024-11-18 13:34:48.635028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:52.587 [2024-11-18 13:34:48.635038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:52.587 [2024-11-18 13:34:48.635046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.587 [2024-11-18 13:34:48.635053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:52.587 [2024-11-18 13:34:48.635061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:52.587 [2024-11-18 13:34:48.635068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.587 [2024-11-18 13:34:48.635076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:52.587 [2024-11-18 13:34:48.635085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:52.587 [2024-11-18 13:34:48.635092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.587 [2024-11-18 13:34:48.635100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:52.587 [2024-11-18 13:34:48.635119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:52.587 [2024-11-18 13:34:48.635127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.587 [2024-11-18 13:34:48.635135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:52.587 [2024-11-18 13:34:48.635143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:52.587 [2024-11-18 13:34:48.635150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.587 [2024-11-18 13:34:48.635158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:52.587 [2024-11-18 13:34:48.635181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:52.587 [2024-11-18 13:34:48.635192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.587 [2024-11-18 13:34:48.635201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:52.587 [2024-11-18 13:34:48.635209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:52.587 [2024-11-18 13:34:48.635217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.587 [2024-11-18 13:34:48.635225] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:52.587 [2024-11-18 13:34:48.635238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:52.587 [2024-11-18 13:34:48.635249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:52.587 [2024-11-18 13:34:48.635258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.587 [2024-11-18 13:34:48.635266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:52.587 [2024-11-18 13:34:48.635276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:52.587 [2024-11-18 13:34:48.635284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:52.587 
[2024-11-18 13:34:48.635292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:52.587 [2024-11-18 13:34:48.635300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:52.587 [2024-11-18 13:34:48.635308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:52.587 [2024-11-18 13:34:48.635318] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:52.587 [2024-11-18 13:34:48.635329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.587 [2024-11-18 13:34:48.635345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:52.587 [2024-11-18 13:34:48.635353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:52.588 [2024-11-18 13:34:48.635362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:52.588 [2024-11-18 13:34:48.635370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:52.588 [2024-11-18 13:34:48.635378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:52.588 [2024-11-18 13:34:48.635385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:52.588 [2024-11-18 13:34:48.635392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:52.588 [2024-11-18 13:34:48.635399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:52.588 [2024-11-18 13:34:48.635406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:52.588 [2024-11-18 13:34:48.635414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:52.588 [2024-11-18 13:34:48.635421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:52.588 [2024-11-18 13:34:48.635428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:52.588 [2024-11-18 13:34:48.635435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:52.588 [2024-11-18 13:34:48.635442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:52.588 [2024-11-18 13:34:48.635449] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:52.588 [2024-11-18 13:34:48.635458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.588 [2024-11-18 13:34:48.635470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:52.588 [2024-11-18 13:34:48.635477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:52.588 [2024-11-18 13:34:48.635484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:52.588 [2024-11-18 13:34:48.635490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:52.588 [2024-11-18 13:34:48.635498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.635506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:52.588 [2024-11-18 13:34:48.635517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:21:52.588 [2024-11-18 13:34:48.635528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.649277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.649325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:52.588 [2024-11-18 13:34:48.649336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.704 ms 00:21:52.588 [2024-11-18 13:34:48.649343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.649428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.649437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:52.588 [2024-11-18 13:34:48.649446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:52.588 [2024-11-18 13:34:48.649454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.671944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.672017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:52.588 [2024-11-18 13:34:48.672039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.429 ms 00:21:52.588 [2024-11-18 13:34:48.672054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.672138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.672157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.588 [2024-11-18 13:34:48.672221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:52.588 [2024-11-18 13:34:48.672235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.672880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.672937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.588 [2024-11-18 13:34:48.672954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:21:52.588 [2024-11-18 13:34:48.672967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.673217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.673236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.588 [2024-11-18 13:34:48.673252] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:21:52.588 [2024-11-18 13:34:48.673265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.682030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.682086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.588 [2024-11-18 13:34:48.682102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.732 ms 00:21:52.588 [2024-11-18 13:34:48.682113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.685933] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:52.588 [2024-11-18 13:34:48.685983] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:52.588 [2024-11-18 13:34:48.685996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.686004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:52.588 [2024-11-18 13:34:48.686013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.759 ms 00:21:52.588 [2024-11-18 13:34:48.686020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.701820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.701873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:52.588 [2024-11-18 13:34:48.701885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.746 ms 00:21:52.588 [2024-11-18 13:34:48.701893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.588 [2024-11-18 13:34:48.706252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.588 [2024-11-18 13:34:48.706361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:52.588 [2024-11-18 13:34:48.706394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:21:52.588 [2024-11-18 13:34:48.706417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.710574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.710680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:52.849 [2024-11-18 13:34:48.710709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:21:52.849 [2024-11-18 13:34:48.710731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.711899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.711972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:52.849 [2024-11-18 13:34:48.712001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:21:52.849 [2024-11-18 13:34:48.712040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.736929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.736988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:52.849 [2024-11-18 13:34:48.737002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.818 ms 00:21:52.849 [2024-11-18 13:34:48.737021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.745311] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:52.849 [2024-11-18 13:34:48.748748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.748800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:52.849 [2024-11-18 13:34:48.748812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.670 ms 00:21:52.849 [2024-11-18 13:34:48.748820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.749008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.749021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:52.849 [2024-11-18 13:34:48.749030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:21:52.849 [2024-11-18 13:34:48.749038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.750751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.750799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:52.849 [2024-11-18 13:34:48.750816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:21:52.849 [2024-11-18 13:34:48.750825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.750860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.750869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:52.849 [2024-11-18 13:34:48.750878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:52.849 [2024-11-18 13:34:48.750886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.750923] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:52.849 [2024-11-18 13:34:48.750933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.750942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:52.849 [2024-11-18 13:34:48.750950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:52.849 [2024-11-18 13:34:48.750962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.756386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.756436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:52.849 [2024-11-18 13:34:48.756446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.398 ms 00:21:52.849 [2024-11-18 13:34:48.756455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 [2024-11-18 13:34:48.756547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.849 [2024-11-18 13:34:48.756559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:52.849 [2024-11-18 13:34:48.756568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:52.849 [2024-11-18 13:34:48.756576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.849 
[2024-11-18 13:34:48.757744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.565 ms, result 0 00:21:54.238  [2024-11-18T13:34:51.312Z] Copying: 8272/1048576 [kB] (8272 kBps) [2024-11-18T13:34:52.257Z] Copying: 18/1024 [MB] (10 MBps) [2024-11-18T13:34:53.204Z] Copying: 31/1024 [MB] (13 MBps) [2024-11-18T13:34:54.149Z] Copying: 42/1024 [MB] (10 MBps) [2024-11-18T13:34:55.095Z] Copying: 53/1024 [MB] (11 MBps) [2024-11-18T13:34:56.040Z] Copying: 65/1024 [MB] (11 MBps) [2024-11-18T13:34:56.986Z] Copying: 75/1024 [MB] (10 MBps) [2024-11-18T13:34:58.373Z] Copying: 87/1024 [MB] (11 MBps) [2024-11-18T13:34:59.319Z] Copying: 100/1024 [MB] (13 MBps) [2024-11-18T13:35:00.262Z] Copying: 119/1024 [MB] (19 MBps) [2024-11-18T13:35:01.208Z] Copying: 133/1024 [MB] (13 MBps) [2024-11-18T13:35:02.150Z] Copying: 152/1024 [MB] (18 MBps) [2024-11-18T13:35:03.097Z] Copying: 168/1024 [MB] (15 MBps) [2024-11-18T13:35:04.041Z] Copying: 188/1024 [MB] (20 MBps) [2024-11-18T13:35:04.984Z] Copying: 213/1024 [MB] (25 MBps) [2024-11-18T13:35:06.369Z] Copying: 238/1024 [MB] (24 MBps) [2024-11-18T13:35:07.314Z] Copying: 255/1024 [MB] (16 MBps) [2024-11-18T13:35:08.257Z] Copying: 267/1024 [MB] (12 MBps) [2024-11-18T13:35:09.202Z] Copying: 279/1024 [MB] (11 MBps) [2024-11-18T13:35:10.145Z] Copying: 291/1024 [MB] (11 MBps) [2024-11-18T13:35:11.090Z] Copying: 302/1024 [MB] (11 MBps) [2024-11-18T13:35:12.035Z] Copying: 314/1024 [MB] (11 MBps) [2024-11-18T13:35:12.980Z] Copying: 325/1024 [MB] (11 MBps) [2024-11-18T13:35:14.369Z] Copying: 337/1024 [MB] (11 MBps) [2024-11-18T13:35:15.316Z] Copying: 348/1024 [MB] (11 MBps) [2024-11-18T13:35:16.262Z] Copying: 359/1024 [MB] (10 MBps) [2024-11-18T13:35:17.208Z] Copying: 370/1024 [MB] (11 MBps) [2024-11-18T13:35:18.154Z] Copying: 381/1024 [MB] (10 MBps) [2024-11-18T13:35:19.242Z] Copying: 392/1024 [MB] (11 MBps) [2024-11-18T13:35:20.187Z] Copying: 403/1024 [MB] (10 MBps) [2024-11-18T13:35:21.132Z] Copying: 414/1024 [MB] (11 MBps) [2024-11-18T13:35:22.076Z] Copying: 425/1024 [MB] (10 MBps) [2024-11-18T13:35:23.022Z] Copying: 435/1024 [MB] (10 MBps) [2024-11-18T13:35:23.969Z] Copying: 446/1024 [MB] (10 MBps) [2024-11-18T13:35:25.359Z] Copying: 457/1024 [MB] (10 MBps) [2024-11-18T13:35:26.303Z] Copying: 472/1024 [MB] (15 MBps) [2024-11-18T13:35:27.248Z] Copying: 488/1024 [MB] (15 MBps) [2024-11-18T13:35:28.189Z] Copying: 501/1024 [MB] (13 MBps) [2024-11-18T13:35:29.133Z] Copying: 517/1024 [MB] (16 MBps) [2024-11-18T13:35:30.079Z] Copying: 530/1024 [MB] (13 MBps) [2024-11-18T13:35:31.022Z] Copying: 544/1024 [MB] (13 MBps) [2024-11-18T13:35:31.965Z] Copying: 559/1024 [MB] (14 MBps) [2024-11-18T13:35:33.354Z] Copying: 578/1024 [MB] (19 MBps) [2024-11-18T13:35:34.299Z] Copying: 597/1024 [MB] (19 MBps) [2024-11-18T13:35:35.244Z] Copying: 617/1024 [MB] (19 MBps) [2024-11-18T13:35:36.184Z] Copying: 637/1024 [MB] (19 MBps) [2024-11-18T13:35:37.126Z] Copying: 658/1024 [MB] (21 MBps) [2024-11-18T13:35:38.068Z] Copying: 683/1024 [MB] (25 MBps) [2024-11-18T13:35:39.012Z] Copying: 712/1024 [MB] (29 MBps) [2024-11-18T13:35:40.397Z] Copying: 731/1024 [MB] (18 MBps) [2024-11-18T13:35:40.969Z] Copying: 745/1024 [MB] (14 MBps) [2024-11-18T13:35:42.354Z] Copying: 760/1024 [MB] (14 MBps) [2024-11-18T13:35:43.298Z] Copying: 783/1024 [MB] (23 MBps) [2024-11-18T13:35:44.243Z] Copying: 801/1024 [MB] (17 MBps) [2024-11-18T13:35:45.190Z] Copying: 814/1024 [MB] (13 MBps) [2024-11-18T13:35:46.134Z] Copying: 832/1024 [MB] (18 MBps) 
[2024-11-18T13:35:47.078Z] Copying: 849/1024 [MB] (16 MBps) [2024-11-18T13:35:48.022Z] Copying: 860/1024 [MB] (11 MBps) [2024-11-18T13:35:48.967Z] Copying: 871/1024 [MB] (10 MBps) [2024-11-18T13:35:50.353Z] Copying: 883/1024 [MB] (11 MBps) [2024-11-18T13:35:51.011Z] Copying: 894/1024 [MB] (10 MBps) [2024-11-18T13:35:52.400Z] Copying: 904/1024 [MB] (10 MBps) [2024-11-18T13:35:52.972Z] Copying: 915/1024 [MB] (10 MBps) [2024-11-18T13:35:54.356Z] Copying: 925/1024 [MB] (10 MBps) [2024-11-18T13:35:55.299Z] Copying: 938/1024 [MB] (12 MBps) [2024-11-18T13:35:56.244Z] Copying: 948/1024 [MB] (10 MBps) [2024-11-18T13:35:57.186Z] Copying: 961/1024 [MB] (12 MBps) [2024-11-18T13:35:58.132Z] Copying: 972/1024 [MB] (10 MBps) [2024-11-18T13:35:59.073Z] Copying: 987/1024 [MB] (15 MBps) [2024-11-18T13:35:59.647Z] Copying: 1011/1024 [MB] (24 MBps) [2024-11-18T13:35:59.647Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 13:35:59.540951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.519 [2024-11-18 13:35:59.541048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:03.519 [2024-11-18 13:35:59.541066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:03.519 [2024-11-18 13:35:59.541077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.519 [2024-11-18 13:35:59.541105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:03.519 [2024-11-18 13:35:59.541920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.519 [2024-11-18 13:35:59.541950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:03.519 [2024-11-18 13:35:59.541964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:23:03.519 [2024-11-18 13:35:59.541991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.519 [2024-11-18 13:35:59.542280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.519 [2024-11-18 13:35:59.542293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:03.519 [2024-11-18 13:35:59.542304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:23:03.519 [2024-11-18 13:35:59.542314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.519 [2024-11-18 13:35:59.550246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.519 [2024-11-18 13:35:59.550293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:03.519 [2024-11-18 13:35:59.550307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.911 ms 00:23:03.519 [2024-11-18 13:35:59.550317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.519 [2024-11-18 13:35:59.556820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.519 [2024-11-18 13:35:59.556875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:03.519 [2024-11-18 13:35:59.556886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.445 ms 00:23:03.519 [2024-11-18 13:35:59.556895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.519 [2024-11-18 13:35:59.559546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.519 [2024-11-18 13:35:59.559588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:03.519 [2024-11-18 
13:35:59.559599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:23:03.519 [2024-11-18 13:35:59.559606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.519 [2024-11-18 13:35:59.563499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.519 [2024-11-18 13:35:59.563540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:03.519 [2024-11-18 13:35:59.563552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.850 ms 00:23:03.519 [2024-11-18 13:35:59.563560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.093 [2024-11-18 13:35:59.914020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.093 [2024-11-18 13:35:59.914075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:04.093 [2024-11-18 13:35:59.914090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 350.398 ms 00:23:04.093 [2024-11-18 13:35:59.914099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.093 [2024-11-18 13:35:59.916781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.093 [2024-11-18 13:35:59.916835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:04.093 [2024-11-18 13:35:59.916846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:23:04.093 [2024-11-18 13:35:59.916854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.093 [2024-11-18 13:35:59.919299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.093 [2024-11-18 13:35:59.919335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:04.093 [2024-11-18 13:35:59.919345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:23:04.093 [2024-11-18 13:35:59.919352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.093 [2024-11-18 13:35:59.921671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.093 [2024-11-18 13:35:59.921715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:04.093 [2024-11-18 13:35:59.921726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:23:04.093 [2024-11-18 13:35:59.921733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.093 [2024-11-18 13:35:59.923765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.093 [2024-11-18 13:35:59.923807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:04.093 [2024-11-18 13:35:59.923818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.967 ms 00:23:04.093 [2024-11-18 13:35:59.923825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.093 [2024-11-18 13:35:59.923862] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:04.093 [2024-11-18 13:35:59.923878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:04.093 [2024-11-18 13:35:59.923890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.923995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.924002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.924010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:04.093 [2024-11-18 13:35:59.924018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924120] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 
13:35:59.924357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:23:04.094 [2024-11-18 13:35:59.924601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:04.094 [2024-11-18 13:35:59.924816] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:04.094 [2024-11-18 13:35:59.924829] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3cc8cf29-3966-4e6d-a4c5-0d674196dd1d 00:23:04.094 [2024-11-18 13:35:59.924838] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:04.094 [2024-11-18 
13:35:59.924862] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 31424 00:23:04.094 [2024-11-18 13:35:59.924881] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 30464 00:23:04.094 [2024-11-18 13:35:59.924892] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0315 00:23:04.094 [2024-11-18 13:35:59.924904] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:04.094 [2024-11-18 13:35:59.924912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:04.094 [2024-11-18 13:35:59.924920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:04.094 [2024-11-18 13:35:59.924927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:04.095 [2024-11-18 13:35:59.924933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:04.095 [2024-11-18 13:35:59.924941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.095 [2024-11-18 13:35:59.924962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:04.095 [2024-11-18 13:35:59.924970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:23:04.095 [2024-11-18 13:35:59.924982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.927270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.095 [2024-11-18 13:35:59.927302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:04.095 [2024-11-18 13:35:59.927314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.270 ms 00:23:04.095 [2024-11-18 13:35:59.927323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.927441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.095 [2024-11-18 13:35:59.927463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:04.095 [2024-11-18 13:35:59.927473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:04.095 [2024-11-18 13:35:59.927486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.934829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.934874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:04.095 [2024-11-18 13:35:59.934886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.934894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.934961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.934978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:04.095 [2024-11-18 13:35:59.934986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.934994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.935040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.935056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:04.095 [2024-11-18 13:35:59.935073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.935081] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.935098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.935111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:04.095 [2024-11-18 13:35:59.935127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.935135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.949258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.949306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:04.095 [2024-11-18 13:35:59.949319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.949327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.960680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.960731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:04.095 [2024-11-18 13:35:59.960744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.960753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.960844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.960857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.095 [2024-11-18 13:35:59.960866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.960874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.960916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.960938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.095 [2024-11-18 13:35:59.960947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.960960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.961047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.961063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.095 [2024-11-18 13:35:59.961076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.961084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.961113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.961122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:04.095 [2024-11-18 13:35:59.961131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.961138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.961202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.961212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:04.095 [2024-11-18 13:35:59.961223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:04.095 [2024-11-18 13:35:59.961231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.961275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.095 [2024-11-18 13:35:59.961286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:04.095 [2024-11-18 13:35:59.961295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.095 [2024-11-18 13:35:59.961304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.095 [2024-11-18 13:35:59.961447] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.461 ms, result 0 00:23:04.095 00:23:04.095 00:23:04.095 13:36:00 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:06.640 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 85861 00:23:06.641 13:36:02 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85861 ']' 00:23:06.641 13:36:02 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85861 00:23:06.641 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85861) - No such process 00:23:06.641 Process with pid 85861 is not found 00:23:06.641 13:36:02 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 85861 is not found' 00:23:06.641 Remove shared memory files 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:06.641 13:36:02 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:06.641 ************************************ 00:23:06.641 END TEST ftl_restore 00:23:06.641 ************************************ 00:23:06.641 00:23:06.641 real 5m9.757s 00:23:06.641 user 4m57.233s 00:23:06.641 sys 0m12.234s 00:23:06.641 13:36:02 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.641 13:36:02 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:06.641 13:36:02 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:06.641 13:36:02 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:06.641 13:36:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.641 13:36:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:06.641 ************************************ 00:23:06.641 START TEST ftl_dirty_shutdown 00:23:06.641 
************************************ 00:23:06.641 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:06.641 * Looking for test storage... 00:23:06.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:06.641 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:06.641 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:06.641 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:06.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.904 --rc genhtml_branch_coverage=1 00:23:06.904 --rc genhtml_function_coverage=1 00:23:06.904 --rc genhtml_legend=1 00:23:06.904 --rc geninfo_all_blocks=1 00:23:06.904 --rc geninfo_unexecuted_blocks=1 00:23:06.904 00:23:06.904 ' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:06.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.904 --rc genhtml_branch_coverage=1 00:23:06.904 --rc genhtml_function_coverage=1 00:23:06.904 --rc genhtml_legend=1 00:23:06.904 --rc geninfo_all_blocks=1 00:23:06.904 --rc geninfo_unexecuted_blocks=1 00:23:06.904 00:23:06.904 ' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:06.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.904 --rc genhtml_branch_coverage=1 00:23:06.904 --rc genhtml_function_coverage=1 00:23:06.904 --rc genhtml_legend=1 00:23:06.904 --rc geninfo_all_blocks=1 00:23:06.904 --rc geninfo_unexecuted_blocks=1 00:23:06.904 00:23:06.904 ' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:06.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.904 --rc genhtml_branch_coverage=1 00:23:06.904 --rc genhtml_function_coverage=1 00:23:06.904 --rc genhtml_legend=1 00:23:06.904 --rc geninfo_all_blocks=1 00:23:06.904 --rc geninfo_unexecuted_blocks=1 00:23:06.904 00:23:06.904 ' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:06.904 13:36:02 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89160 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89160 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 89160 ']' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.904 13:36:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:06.904 [2024-11-18 13:36:02.906796] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:23:06.904 [2024-11-18 13:36:02.906953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89160 ] 00:23:07.166 [2024-11-18 13:36:03.063043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.166 [2024-11-18 13:36:03.092130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.740 13:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.740 13:36:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:23:07.740 13:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:07.740 13:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:07.740 13:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:07.740 13:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:07.740 13:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:07.740 13:36:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:08.001 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:08.001 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:08.001 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:08.001 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:08.001 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:08.001 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:08.001 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:08.001 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:08.264 { 00:23:08.264 "name": "nvme0n1", 00:23:08.264 "aliases": [ 00:23:08.264 "c1f8e561-834a-4e51-b7f2-8b6a0be3e875" 00:23:08.264 ], 00:23:08.264 "product_name": "NVMe disk", 00:23:08.264 "block_size": 4096, 00:23:08.264 "num_blocks": 1310720, 00:23:08.264 "uuid": "c1f8e561-834a-4e51-b7f2-8b6a0be3e875", 00:23:08.264 "numa_id": -1, 00:23:08.264 "assigned_rate_limits": { 00:23:08.264 "rw_ios_per_sec": 0, 00:23:08.264 "rw_mbytes_per_sec": 0, 00:23:08.264 "r_mbytes_per_sec": 0, 00:23:08.264 "w_mbytes_per_sec": 0 00:23:08.264 }, 00:23:08.264 "claimed": true, 00:23:08.264 "claim_type": "read_many_write_one", 00:23:08.264 "zoned": false, 00:23:08.264 "supported_io_types": { 00:23:08.264 "read": true, 00:23:08.264 "write": true, 00:23:08.264 "unmap": true, 00:23:08.264 "flush": true, 00:23:08.264 "reset": true, 00:23:08.264 "nvme_admin": true, 00:23:08.264 "nvme_io": true, 00:23:08.264 "nvme_io_md": false, 00:23:08.264 "write_zeroes": true, 00:23:08.264 "zcopy": false, 00:23:08.264 "get_zone_info": false, 00:23:08.264 "zone_management": false, 00:23:08.264 "zone_append": false, 00:23:08.264 "compare": true, 00:23:08.264 "compare_and_write": false, 00:23:08.264 "abort": true, 00:23:08.264 "seek_hole": false, 00:23:08.264 "seek_data": false, 00:23:08.264 
"copy": true, 00:23:08.264 "nvme_iov_md": false 00:23:08.264 }, 00:23:08.264 "driver_specific": { 00:23:08.264 "nvme": [ 00:23:08.264 { 00:23:08.264 "pci_address": "0000:00:11.0", 00:23:08.264 "trid": { 00:23:08.264 "trtype": "PCIe", 00:23:08.264 "traddr": "0000:00:11.0" 00:23:08.264 }, 00:23:08.264 "ctrlr_data": { 00:23:08.264 "cntlid": 0, 00:23:08.264 "vendor_id": "0x1b36", 00:23:08.264 "model_number": "QEMU NVMe Ctrl", 00:23:08.264 "serial_number": "12341", 00:23:08.264 "firmware_revision": "8.0.0", 00:23:08.264 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:08.264 "oacs": { 00:23:08.264 "security": 0, 00:23:08.264 "format": 1, 00:23:08.264 "firmware": 0, 00:23:08.264 "ns_manage": 1 00:23:08.264 }, 00:23:08.264 "multi_ctrlr": false, 00:23:08.264 "ana_reporting": false 00:23:08.264 }, 00:23:08.264 "vs": { 00:23:08.264 "nvme_version": "1.4" 00:23:08.264 }, 00:23:08.264 "ns_data": { 00:23:08.264 "id": 1, 00:23:08.264 "can_share": false 00:23:08.264 } 00:23:08.264 } 00:23:08.264 ], 00:23:08.264 "mp_policy": "active_passive" 00:23:08.264 } 00:23:08.264 } 00:23:08.264 ]' 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:08.264 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:08.526 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=fdb9914e-c82d-45cf-9e15-39ac06515d4f 00:23:08.526 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:08.526 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fdb9914e-c82d-45cf-9e15-39ac06515d4f 00:23:08.785 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:09.044 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=fbbacaee-6ae0-43c2-8500-21ed3bb52a94 00:23:09.044 13:36:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fbbacaee-6ae0-43c2-8500-21ed3bb52a94 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=28186778-d044-44da-93b0-3ed1ffd9e494 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 28186778-d044-44da-93b0-3ed1ffd9e494 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=28186778-d044-44da-93b0-3ed1ffd9e494 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 28186778-d044-44da-93b0-3ed1ffd9e494 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=28186778-d044-44da-93b0-3ed1ffd9e494 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28186778-d044-44da-93b0-3ed1ffd9e494 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:09.302 { 00:23:09.302 "name": "28186778-d044-44da-93b0-3ed1ffd9e494", 00:23:09.302 "aliases": [ 00:23:09.302 "lvs/nvme0n1p0" 00:23:09.302 ], 00:23:09.302 "product_name": "Logical Volume", 00:23:09.302 "block_size": 4096, 00:23:09.302 "num_blocks": 26476544, 00:23:09.302 "uuid": "28186778-d044-44da-93b0-3ed1ffd9e494", 00:23:09.302 "assigned_rate_limits": { 00:23:09.302 "rw_ios_per_sec": 0, 00:23:09.302 "rw_mbytes_per_sec": 0, 00:23:09.302 "r_mbytes_per_sec": 0, 00:23:09.302 "w_mbytes_per_sec": 0 00:23:09.302 }, 00:23:09.302 "claimed": false, 00:23:09.302 "zoned": false, 00:23:09.302 "supported_io_types": { 00:23:09.302 "read": true, 00:23:09.302 "write": true, 00:23:09.302 "unmap": true, 00:23:09.302 "flush": false, 00:23:09.302 "reset": true, 00:23:09.302 "nvme_admin": false, 00:23:09.302 "nvme_io": false, 00:23:09.302 "nvme_io_md": false, 00:23:09.302 "write_zeroes": true, 00:23:09.302 "zcopy": false, 00:23:09.302 "get_zone_info": false, 00:23:09.302 "zone_management": false, 00:23:09.302 "zone_append": false, 00:23:09.302 "compare": false, 00:23:09.302 "compare_and_write": false, 00:23:09.302 "abort": false, 00:23:09.302 "seek_hole": true, 00:23:09.302 "seek_data": true, 00:23:09.302 "copy": false, 00:23:09.302 "nvme_iov_md": false 00:23:09.302 }, 00:23:09.302 "driver_specific": { 00:23:09.302 "lvol": { 00:23:09.302 "lvol_store_uuid": "fbbacaee-6ae0-43c2-8500-21ed3bb52a94", 00:23:09.302 "base_bdev": "nvme0n1", 00:23:09.302 "thin_provision": true, 00:23:09.302 "num_allocated_clusters": 0, 00:23:09.302 "snapshot": false, 00:23:09.302 "clone": false, 00:23:09.302 "esnap_clone": false 00:23:09.302 } 00:23:09.302 } 00:23:09.302 } 00:23:09.302 ]' 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:09.302 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:09.560 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:09.560 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:09.560 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:09.560 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:09.560 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:09.560 13:36:05 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 28186778-d044-44da-93b0-3ed1ffd9e494 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=28186778-d044-44da-93b0-3ed1ffd9e494 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28186778-d044-44da-93b0-3ed1ffd9e494 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:09.819 { 00:23:09.819 "name": "28186778-d044-44da-93b0-3ed1ffd9e494", 00:23:09.819 "aliases": [ 00:23:09.819 "lvs/nvme0n1p0" 00:23:09.819 ], 00:23:09.819 "product_name": "Logical Volume", 00:23:09.819 "block_size": 4096, 00:23:09.819 "num_blocks": 26476544, 00:23:09.819 "uuid": "28186778-d044-44da-93b0-3ed1ffd9e494", 00:23:09.819 "assigned_rate_limits": { 00:23:09.819 "rw_ios_per_sec": 0, 00:23:09.819 "rw_mbytes_per_sec": 0, 00:23:09.819 "r_mbytes_per_sec": 0, 00:23:09.819 "w_mbytes_per_sec": 0 00:23:09.819 }, 00:23:09.819 "claimed": false, 00:23:09.819 "zoned": false, 00:23:09.819 "supported_io_types": { 00:23:09.819 "read": true, 00:23:09.819 "write": true, 00:23:09.819 "unmap": true, 00:23:09.819 "flush": false, 00:23:09.819 "reset": true, 00:23:09.819 "nvme_admin": false, 00:23:09.819 "nvme_io": false, 00:23:09.819 "nvme_io_md": false, 00:23:09.819 "write_zeroes": true, 00:23:09.819 "zcopy": false, 00:23:09.819 "get_zone_info": false, 00:23:09.819 "zone_management": false, 00:23:09.819 "zone_append": false, 00:23:09.819 "compare": false, 00:23:09.819 "compare_and_write": false, 00:23:09.819 "abort": false, 00:23:09.819 "seek_hole": true, 00:23:09.819 "seek_data": true, 00:23:09.819 "copy": false, 00:23:09.819 "nvme_iov_md": false 00:23:09.819 }, 00:23:09.819 "driver_specific": { 00:23:09.819 "lvol": { 00:23:09.819 "lvol_store_uuid": "fbbacaee-6ae0-43c2-8500-21ed3bb52a94", 00:23:09.819 "base_bdev": "nvme0n1", 00:23:09.819 "thin_provision": true, 00:23:09.819 "num_allocated_clusters": 0, 00:23:09.819 "snapshot": false, 00:23:09.819 "clone": false, 00:23:09.819 "esnap_clone": false 00:23:09.819 } 00:23:09.819 } 00:23:09.819 } 00:23:09.819 ]' 00:23:09.819 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:10.078 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:10.078 13:36:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:10.078 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:10.078 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:10.078 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:10.078 13:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:10.078 13:36:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 28186778-d044-44da-93b0-3ed1ffd9e494 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=28186778-d044-44da-93b0-3ed1ffd9e494 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28186778-d044-44da-93b0-3ed1ffd9e494 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:10.336 { 00:23:10.336 "name": "28186778-d044-44da-93b0-3ed1ffd9e494", 00:23:10.336 "aliases": [ 00:23:10.336 "lvs/nvme0n1p0" 00:23:10.336 ], 00:23:10.336 "product_name": "Logical Volume", 00:23:10.336 "block_size": 4096, 00:23:10.336 "num_blocks": 26476544, 00:23:10.336 "uuid": "28186778-d044-44da-93b0-3ed1ffd9e494", 00:23:10.336 "assigned_rate_limits": { 00:23:10.336 "rw_ios_per_sec": 0, 00:23:10.336 "rw_mbytes_per_sec": 0, 00:23:10.336 "r_mbytes_per_sec": 0, 00:23:10.336 "w_mbytes_per_sec": 0 00:23:10.336 }, 00:23:10.336 "claimed": false, 00:23:10.336 "zoned": false, 00:23:10.336 "supported_io_types": { 00:23:10.336 "read": true, 00:23:10.336 "write": true, 00:23:10.336 "unmap": true, 00:23:10.336 "flush": false, 00:23:10.336 "reset": true, 00:23:10.336 "nvme_admin": false, 00:23:10.336 "nvme_io": false, 00:23:10.336 "nvme_io_md": false, 00:23:10.336 "write_zeroes": true, 00:23:10.336 "zcopy": false, 00:23:10.336 "get_zone_info": false, 00:23:10.336 "zone_management": false, 00:23:10.336 "zone_append": false, 00:23:10.336 "compare": false, 00:23:10.336 "compare_and_write": false, 00:23:10.336 "abort": false, 00:23:10.336 "seek_hole": true, 00:23:10.336 "seek_data": true, 00:23:10.336 "copy": false, 00:23:10.336 "nvme_iov_md": false 00:23:10.336 }, 00:23:10.336 "driver_specific": { 00:23:10.336 "lvol": { 00:23:10.336 "lvol_store_uuid": "fbbacaee-6ae0-43c2-8500-21ed3bb52a94", 00:23:10.336 "base_bdev": "nvme0n1", 00:23:10.336 "thin_provision": true, 00:23:10.336 "num_allocated_clusters": 0, 00:23:10.336 "snapshot": false, 00:23:10.336 "clone": false, 00:23:10.336 "esnap_clone": false 00:23:10.336 } 00:23:10.336 } 00:23:10.336 } 00:23:10.336 ]' 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:10.336 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:10.595 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:10.595 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:10.595 13:36:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:10.595 13:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:10.595 13:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 28186778-d044-44da-93b0-3ed1ffd9e494 
--l2p_dram_limit 10' 00:23:10.595 13:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:10.595 13:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:10.595 13:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:10.595 13:36:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 28186778-d044-44da-93b0-3ed1ffd9e494 --l2p_dram_limit 10 -c nvc0n1p0 00:23:10.595 [2024-11-18 13:36:06.658304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.595 [2024-11-18 13:36:06.658343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:10.595 [2024-11-18 13:36:06.658354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:10.595 [2024-11-18 13:36:06.658362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.595 [2024-11-18 13:36:06.658404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.595 [2024-11-18 13:36:06.658412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:10.595 [2024-11-18 13:36:06.658420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:10.595 [2024-11-18 13:36:06.658429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.595 [2024-11-18 13:36:06.658449] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:10.595 [2024-11-18 13:36:06.658664] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:10.595 [2024-11-18 13:36:06.658677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.595 [2024-11-18 13:36:06.658685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:10.595 [2024-11-18 13:36:06.658694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:23:10.595 [2024-11-18 13:36:06.658701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.595 [2024-11-18 13:36:06.658723] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3ca886ed-2023-41e1-804f-2576e41e52db 00:23:10.595 [2024-11-18 13:36:06.659785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.595 [2024-11-18 13:36:06.659809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:10.595 [2024-11-18 13:36:06.659824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:23:10.595 [2024-11-18 13:36:06.659830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.595 [2024-11-18 13:36:06.664518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.595 [2024-11-18 13:36:06.664546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:10.595 [2024-11-18 13:36:06.664558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.629 ms 00:23:10.595 [2024-11-18 13:36:06.664568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.595 [2024-11-18 13:36:06.664627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.595 [2024-11-18 13:36:06.664639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:10.595 [2024-11-18 13:36:06.664649] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:10.595 [2024-11-18 13:36:06.664654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.595 [2024-11-18 13:36:06.664690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.595 [2024-11-18 13:36:06.664699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:10.595 [2024-11-18 13:36:06.664706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:10.595 [2024-11-18 13:36:06.664712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.595 [2024-11-18 13:36:06.664730] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:10.595 [2024-11-18 13:36:06.665982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.595 [2024-11-18 13:36:06.666013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:10.595 [2024-11-18 13:36:06.666021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:23:10.595 [2024-11-18 13:36:06.666030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.595 [2024-11-18 13:36:06.666055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.595 [2024-11-18 13:36:06.666063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:10.595 [2024-11-18 13:36:06.666069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:10.595 [2024-11-18 13:36:06.666077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.595 [2024-11-18 13:36:06.666090] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:10.596 [2024-11-18 13:36:06.666211] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:10.596 [2024-11-18 13:36:06.666221] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:10.596 [2024-11-18 13:36:06.666237] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:10.596 [2024-11-18 13:36:06.666245] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666256] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666263] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:10.596 [2024-11-18 13:36:06.666272] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:10.596 [2024-11-18 13:36:06.666278] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:10.596 [2024-11-18 13:36:06.666285] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:10.596 [2024-11-18 13:36:06.666291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.596 [2024-11-18 13:36:06.666297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:10.596 [2024-11-18 13:36:06.666303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:23:10.596 [2024-11-18 13:36:06.666310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.596 [2024-11-18 13:36:06.666374] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.596 [2024-11-18 13:36:06.666385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:10.596 [2024-11-18 13:36:06.666391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:10.596 [2024-11-18 13:36:06.666397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.596 [2024-11-18 13:36:06.666471] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:10.596 [2024-11-18 13:36:06.666480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:10.596 [2024-11-18 13:36:06.666489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:10.596 [2024-11-18 13:36:06.666511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:10.596 [2024-11-18 13:36:06.666528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.596 [2024-11-18 13:36:06.666540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:10.596 [2024-11-18 13:36:06.666546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:10.596 [2024-11-18 13:36:06.666552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.596 [2024-11-18 13:36:06.666560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:10.596 [2024-11-18 13:36:06.666565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:10.596 [2024-11-18 13:36:06.666573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:10.596 [2024-11-18 13:36:06.666587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:10.596 [2024-11-18 13:36:06.666603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:10.596 [2024-11-18 13:36:06.666621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:10.596 [2024-11-18 13:36:06.666640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666652] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:10.596 [2024-11-18 13:36:06.666660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:10.596 [2024-11-18 13:36:06.666681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.596 [2024-11-18 13:36:06.666694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:10.596 [2024-11-18 13:36:06.666701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:10.596 [2024-11-18 13:36:06.666707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.596 [2024-11-18 13:36:06.666714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:10.596 [2024-11-18 13:36:06.666720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:10.596 [2024-11-18 13:36:06.666727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:10.596 [2024-11-18 13:36:06.666740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:10.596 [2024-11-18 13:36:06.666745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666752] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:10.596 [2024-11-18 13:36:06.666759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:10.596 [2024-11-18 13:36:06.666769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.596 [2024-11-18 13:36:06.666787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:10.596 [2024-11-18 13:36:06.666793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:10.596 [2024-11-18 13:36:06.666799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:10.596 [2024-11-18 13:36:06.666807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:10.596 [2024-11-18 13:36:06.666813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:10.596 [2024-11-18 13:36:06.666819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:10.596 [2024-11-18 13:36:06.666829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:10.596 [2024-11-18 13:36:06.666840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.596 [2024-11-18 13:36:06.666851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:10.596 [2024-11-18 13:36:06.666857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:10.596 [2024-11-18 13:36:06.666865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:10.596 [2024-11-18 13:36:06.666871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:10.596 [2024-11-18 13:36:06.666879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:10.596 [2024-11-18 13:36:06.666885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:10.596 [2024-11-18 13:36:06.666894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:10.596 [2024-11-18 13:36:06.666900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:10.596 [2024-11-18 13:36:06.666908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:10.596 [2024-11-18 13:36:06.666914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:10.596 [2024-11-18 13:36:06.666921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:10.596 [2024-11-18 13:36:06.666927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:10.596 [2024-11-18 13:36:06.666935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:10.596 [2024-11-18 13:36:06.666941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:10.596 [2024-11-18 13:36:06.666948] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:10.597 [2024-11-18 13:36:06.666956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.597 [2024-11-18 13:36:06.666964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:10.597 [2024-11-18 13:36:06.666970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:10.597 [2024-11-18 13:36:06.666979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:10.597 [2024-11-18 13:36:06.666985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:10.597 [2024-11-18 13:36:06.666993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.597 [2024-11-18 13:36:06.666999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:10.597 [2024-11-18 13:36:06.667011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:23:10.597 [2024-11-18 13:36:06.667017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.597 [2024-11-18 13:36:06.667054] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:10.597 [2024-11-18 13:36:06.667062] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:13.895 [2024-11-18 13:36:09.719202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.719291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:13.895 [2024-11-18 13:36:09.719311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3052.128 ms 00:23:13.895 [2024-11-18 13:36:09.719320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.732683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.732739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:13.895 [2024-11-18 13:36:09.732756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.241 ms 00:23:13.895 [2024-11-18 13:36:09.732765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.732866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.732877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:13.895 [2024-11-18 13:36:09.732889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:13.895 [2024-11-18 13:36:09.732897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.745438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.745488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:13.895 [2024-11-18 13:36:09.745502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.475 ms 00:23:13.895 [2024-11-18 13:36:09.745510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.745549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.745558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:13.895 [2024-11-18 13:36:09.745568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:13.895 [2024-11-18 13:36:09.745577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.746147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.746200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:13.895 [2024-11-18 13:36:09.746215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:23:13.895 [2024-11-18 13:36:09.746224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.746349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.746364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:13.895 [2024-11-18 13:36:09.746376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:13.895 [2024-11-18 13:36:09.746385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.754534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.754578] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:13.895 [2024-11-18 13:36:09.754591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.124 ms 00:23:13.895 [2024-11-18 13:36:09.754603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.764431] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:13.895 [2024-11-18 13:36:09.768132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.768189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:13.895 [2024-11-18 13:36:09.768200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.462 ms 00:23:13.895 [2024-11-18 13:36:09.768210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.859851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.859932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:13.895 [2024-11-18 13:36:09.859953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.608 ms 00:23:13.895 [2024-11-18 13:36:09.859969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.860213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.860230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:13.895 [2024-11-18 13:36:09.860240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:23:13.895 [2024-11-18 13:36:09.860250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.866319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.866378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:13.895 [2024-11-18 13:36:09.866390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.045 ms 00:23:13.895 [2024-11-18 13:36:09.866405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.871558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.871614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:13.895 [2024-11-18 13:36:09.871625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.102 ms 00:23:13.895 [2024-11-18 13:36:09.871635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.871978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.871993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:13.895 [2024-11-18 13:36:09.872003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:23:13.895 [2024-11-18 13:36:09.872015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.913103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.913163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:13.895 [2024-11-18 13:36:09.913193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.050 ms 00:23:13.895 [2024-11-18 13:36:09.913208] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.920262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.920352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:13.895 [2024-11-18 13:36:09.920364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:23:13.895 [2024-11-18 13:36:09.920375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.895 [2024-11-18 13:36:09.925668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.895 [2024-11-18 13:36:09.925724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:13.896 [2024-11-18 13:36:09.925735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.244 ms 00:23:13.896 [2024-11-18 13:36:09.925745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.896 [2024-11-18 13:36:09.932147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.896 [2024-11-18 13:36:09.932217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:13.896 [2024-11-18 13:36:09.932229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.353 ms 00:23:13.896 [2024-11-18 13:36:09.932241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.896 [2024-11-18 13:36:09.932294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.896 [2024-11-18 13:36:09.932307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:13.896 [2024-11-18 13:36:09.932317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:13.896 [2024-11-18 13:36:09.932335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.896 [2024-11-18 13:36:09.932410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.896 [2024-11-18 13:36:09.932422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:13.896 [2024-11-18 13:36:09.932432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:13.896 [2024-11-18 13:36:09.932441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.896 [2024-11-18 13:36:09.933587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3274.787 ms, result 0 00:23:13.896 { 00:23:13.896 "name": "ftl0", 00:23:13.896 "uuid": "3ca886ed-2023-41e1-804f-2576e41e52db" 00:23:13.896 } 00:23:13.896 13:36:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:13.896 13:36:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:14.156 13:36:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:14.156 13:36:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:14.156 13:36:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:14.417 /dev/nbd0 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:14.417 1+0 records in 00:23:14.417 1+0 records out 00:23:14.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373446 s, 11.0 MB/s 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:23:14.417 13:36:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:14.417 [2024-11-18 13:36:10.476691] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:23:14.417 [2024-11-18 13:36:10.476873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89296 ] 00:23:14.678 [2024-11-18 13:36:10.639313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.678 [2024-11-18 13:36:10.668435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.618  [2024-11-18T13:36:13.120Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-18T13:36:14.054Z] Copying: 385/1024 [MB] (196 MBps) [2024-11-18T13:36:14.989Z] Copying: 581/1024 [MB] (196 MBps) [2024-11-18T13:36:15.925Z] Copying: 775/1024 [MB] (194 MBps) [2024-11-18T13:36:16.183Z] Copying: 972/1024 [MB] (196 MBps) [2024-11-18T13:36:16.183Z] Copying: 1024/1024 [MB] (average 196 MBps) 00:23:20.055 00:23:20.055 13:36:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:21.953 13:36:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:21.953 [2024-11-18 13:36:18.060601] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
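The device plumbing exercised above reduces to a short command sequence, all of which appears verbatim in the log: load the nbd module, expose the FTL bdev as /dev/nbd0 over RPC, wait for the kernel to register the device, then fill the 1 GiB test file and checksum it. The sketch below is an illustrative condensation only — paths are relative to the SPDK repo checkout, and the retry count and sleep interval in the wait loop are assumptions rather than the exact waitfornbd() implementation from common/autotest_common.sh:

  modprobe nbd
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0            # expose the FTL bdev as a block device
  for i in $(seq 1 20); do                                # wait until the kernel registers nbd0
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1                                           # interval is an assumption
  done
  build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
  md5sum test/ftl/testfile                                # checksum of the 1 GiB test pattern
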
00:23:21.953 [2024-11-18 13:36:18.060722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89377 ] 00:23:22.211 [2024-11-18 13:36:18.215266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.211 [2024-11-18 13:36:18.232291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.585  [2024-11-18T13:36:20.646Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-18T13:36:21.580Z] Copying: 59/1024 [MB] (29 MBps) [2024-11-18T13:36:22.517Z] Copying: 88/1024 [MB] (29 MBps) [2024-11-18T13:36:23.548Z] Copying: 119/1024 [MB] (30 MBps) [2024-11-18T13:36:24.484Z] Copying: 153/1024 [MB] (33 MBps) [2024-11-18T13:36:25.418Z] Copying: 190/1024 [MB] (37 MBps) [2024-11-18T13:36:26.353Z] Copying: 223/1024 [MB] (33 MBps) [2024-11-18T13:36:27.288Z] Copying: 260/1024 [MB] (36 MBps) [2024-11-18T13:36:28.664Z] Copying: 293/1024 [MB] (32 MBps) [2024-11-18T13:36:29.599Z] Copying: 325/1024 [MB] (31 MBps) [2024-11-18T13:36:30.534Z] Copying: 362/1024 [MB] (37 MBps) [2024-11-18T13:36:31.469Z] Copying: 400/1024 [MB] (37 MBps) [2024-11-18T13:36:32.404Z] Copying: 431/1024 [MB] (31 MBps) [2024-11-18T13:36:33.339Z] Copying: 464/1024 [MB] (32 MBps) [2024-11-18T13:36:34.711Z] Copying: 500/1024 [MB] (36 MBps) [2024-11-18T13:36:35.646Z] Copying: 534/1024 [MB] (33 MBps) [2024-11-18T13:36:36.581Z] Copying: 565/1024 [MB] (31 MBps) [2024-11-18T13:36:37.516Z] Copying: 596/1024 [MB] (30 MBps) [2024-11-18T13:36:38.451Z] Copying: 628/1024 [MB] (32 MBps) [2024-11-18T13:36:39.385Z] Copying: 666/1024 [MB] (38 MBps) [2024-11-18T13:36:40.316Z] Copying: 700/1024 [MB] (33 MBps) [2024-11-18T13:36:41.691Z] Copying: 733/1024 [MB] (33 MBps) [2024-11-18T13:36:42.624Z] Copying: 768/1024 [MB] (35 MBps) [2024-11-18T13:36:43.558Z] Copying: 800/1024 [MB] (31 MBps) [2024-11-18T13:36:44.492Z] Copying: 831/1024 [MB] (31 MBps) [2024-11-18T13:36:45.425Z] Copying: 865/1024 [MB] (33 MBps) [2024-11-18T13:36:46.358Z] Copying: 899/1024 [MB] (34 MBps) [2024-11-18T13:36:47.291Z] Copying: 933/1024 [MB] (34 MBps) [2024-11-18T13:36:48.665Z] Copying: 965/1024 [MB] (32 MBps) [2024-11-18T13:36:48.923Z] Copying: 1003/1024 [MB] (37 MBps) [2024-11-18T13:36:49.183Z] Copying: 1024/1024 [MB] (average 33 MBps) 00:23:53.055 00:23:53.055 13:36:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:53.055 13:36:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:53.316 13:36:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:53.316 [2024-11-18 13:36:49.381637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.316 [2024-11-18 13:36:49.381719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:53.316 [2024-11-18 13:36:49.381740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:53.317 [2024-11-18 13:36:49.381750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.381779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:53.317 [2024-11-18 13:36:49.382786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.382842] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:53.317 [2024-11-18 13:36:49.382855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:23:53.317 [2024-11-18 13:36:49.382870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.386043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.386098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:53.317 [2024-11-18 13:36:49.386110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:23:53.317 [2024-11-18 13:36:49.386121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.406083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.406140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:53.317 [2024-11-18 13:36:49.406155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.935 ms 00:23:53.317 [2024-11-18 13:36:49.406176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.412407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.412456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:53.317 [2024-11-18 13:36:49.412469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms 00:23:53.317 [2024-11-18 13:36:49.412482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.415568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.415630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:53.317 [2024-11-18 13:36:49.415641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:23:53.317 [2024-11-18 13:36:49.415652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.423513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.423579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:53.317 [2024-11-18 13:36:49.423593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.810 ms 00:23:53.317 [2024-11-18 13:36:49.423609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.423808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.423825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:53.317 [2024-11-18 13:36:49.423835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:23:53.317 [2024-11-18 13:36:49.423847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.427314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.427371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:53.317 [2024-11-18 13:36:49.427382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.447 ms 00:23:53.317 [2024-11-18 13:36:49.427394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.430537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.430598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:53.317 [2024-11-18 13:36:49.430608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:23:53.317 [2024-11-18 13:36:49.430619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.433293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.433348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:53.317 [2024-11-18 13:36:49.433358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:23:53.317 [2024-11-18 13:36:49.433368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.436002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.317 [2024-11-18 13:36:49.436060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:53.317 [2024-11-18 13:36:49.436070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms 00:23:53.317 [2024-11-18 13:36:49.436080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.317 [2024-11-18 13:36:49.436126] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:53.317 [2024-11-18 13:36:49.436146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 
13:36:49.436332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:23:53.317 [2024-11-18 13:36:49.436577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:53.317 [2024-11-18 13:36:49.436688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.436990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:53.318 [2024-11-18 13:36:49.437221] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:53.318 [2024-11-18 13:36:49.437237] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ca886ed-2023-41e1-804f-2576e41e52db 00:23:53.318 [2024-11-18 13:36:49.437252] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:53.318 [2024-11-18 13:36:49.437260] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:53.318 [2024-11-18 13:36:49.437270] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:53.318 [2024-11-18 13:36:49.437279] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:53.318 [2024-11-18 13:36:49.437291] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:53.318 [2024-11-18 13:36:49.437301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:53.318 [2024-11-18 13:36:49.437312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:53.318 [2024-11-18 13:36:49.437320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:53.318 [2024-11-18 13:36:49.437329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:53.318 [2024-11-18 13:36:49.437337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.318 [2024-11-18 13:36:49.437349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:53.318 [2024-11-18 13:36:49.437359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms 00:23:53.318 [2024-11-18 13:36:49.437378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.318 [2024-11-18 13:36:49.440689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.318 [2024-11-18 13:36:49.440744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:53.318 [2024-11-18 13:36:49.440757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:23:53.318 [2024-11-18 
13:36:49.440769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.318 [2024-11-18 13:36:49.440923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.318 [2024-11-18 13:36:49.440937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:53.318 [2024-11-18 13:36:49.440952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:23:53.318 [2024-11-18 13:36:49.440962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.452156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.452257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.579 [2024-11-18 13:36:49.452269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.452282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.452362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.452375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.579 [2024-11-18 13:36:49.452387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.452409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.452497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.452516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.579 [2024-11-18 13:36:49.452525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.452537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.452557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.452568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.579 [2024-11-18 13:36:49.452578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.452591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.472725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.472788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.579 [2024-11-18 13:36:49.472801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.472813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.488648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.488711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.579 [2024-11-18 13:36:49.488724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.488739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.488837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.488856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.579 [2024-11-18 13:36:49.488865] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.488877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.488936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.488951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.579 [2024-11-18 13:36:49.488961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.488974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.489062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.489077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.579 [2024-11-18 13:36:49.489091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.489103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.489140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.489154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:53.579 [2024-11-18 13:36:49.489221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.579 [2024-11-18 13:36:49.489235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.579 [2024-11-18 13:36:49.489292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.579 [2024-11-18 13:36:49.489311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.579 [2024-11-18 13:36:49.489320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.580 [2024-11-18 13:36:49.489332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.580 [2024-11-18 13:36:49.489395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.580 [2024-11-18 13:36:49.489425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.580 [2024-11-18 13:36:49.489435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.580 [2024-11-18 13:36:49.489447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.580 [2024-11-18 13:36:49.489632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 107.941 ms, result 0 00:23:53.580 true 00:23:53.580 13:36:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89160 00:23:53.580 13:36:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89160 00:23:53.580 13:36:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:53.580 [2024-11-18 13:36:49.578644] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
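The trace_step records above emit one "name:" line and one "duration:" line per FTL management step, for both the startup and the shutdown sequences. When the raw console output is saved with one record per line (the transcript here is line-wrapped), the per-step timings can be paired up with a quick pipeline; build.log below is a hypothetical filename for that saved output:

  # Pair each management step with its reported duration (assumes one record per line)
  grep -oE 'name: .*|duration: [0-9.]+ ms' build.log | paste - -
  # ...and list the slowest steps first (the duration value follows the third colon)
  grep -oE 'name: .*|duration: [0-9.]+ ms' build.log | paste - - | sort -t: -k3 -rn | head
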
00:23:53.580 [2024-11-18 13:36:49.578802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89712 ] 00:23:53.841 [2024-11-18 13:36:49.732607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.841 [2024-11-18 13:36:49.758948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.785  [2024-11-18T13:36:51.856Z] Copying: 205/1024 [MB] (205 MBps) [2024-11-18T13:36:53.249Z] Copying: 460/1024 [MB] (254 MBps) [2024-11-18T13:36:53.872Z] Copying: 717/1024 [MB] (256 MBps) [2024-11-18T13:36:54.155Z] Copying: 972/1024 [MB] (254 MBps) [2024-11-18T13:36:54.417Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:23:58.289 00:23:58.289 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89160 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:58.289 13:36:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:58.289 [2024-11-18 13:36:54.267961] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:23:58.289 [2024-11-18 13:36:54.268081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89765 ] 00:23:58.550 [2024-11-18 13:36:54.421250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.550 [2024-11-18 13:36:54.443720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.550 [2024-11-18 13:36:54.543188] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.550 [2024-11-18 13:36:54.543248] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.550 [2024-11-18 13:36:54.606033] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:58.550 [2024-11-18 13:36:54.606657] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:58.550 [2024-11-18 13:36:54.607247] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:58.811 [2024-11-18 13:36:54.928457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.811 [2024-11-18 13:36:54.928492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:58.811 [2024-11-18 13:36:54.928504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:58.811 [2024-11-18 13:36:54.928510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.811 [2024-11-18 13:36:54.928548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.811 [2024-11-18 13:36:54.928557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.811 [2024-11-18 13:36:54.928565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:58.811 [2024-11-18 13:36:54.928571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.811 [2024-11-18 13:36:54.928587] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:58.811 
[2024-11-18 13:36:54.928768] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:58.811 [2024-11-18 13:36:54.928780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.811 [2024-11-18 13:36:54.928786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.811 [2024-11-18 13:36:54.928796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:23:58.811 [2024-11-18 13:36:54.928804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.811 [2024-11-18 13:36:54.930046] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:58.811 [2024-11-18 13:36:54.933032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.811 [2024-11-18 13:36:54.933060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:58.811 [2024-11-18 13:36:54.933068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:23:58.811 [2024-11-18 13:36:54.933074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.811 [2024-11-18 13:36:54.933118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.811 [2024-11-18 13:36:54.933126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:58.811 [2024-11-18 13:36:54.933132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:58.811 [2024-11-18 13:36:54.933138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.073 [2024-11-18 13:36:54.939426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.073 [2024-11-18 13:36:54.939451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.073 [2024-11-18 13:36:54.939463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.233 ms 00:23:59.073 [2024-11-18 13:36:54.939469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.073 [2024-11-18 13:36:54.939539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.073 [2024-11-18 13:36:54.939549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.073 [2024-11-18 13:36:54.939558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:59.073 [2024-11-18 13:36:54.939568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.073 [2024-11-18 13:36:54.939604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.073 [2024-11-18 13:36:54.939613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:59.073 [2024-11-18 13:36:54.939619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:59.073 [2024-11-18 13:36:54.939625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.073 [2024-11-18 13:36:54.939641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:59.073 [2024-11-18 13:36:54.941189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.073 [2024-11-18 13:36:54.941213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.073 [2024-11-18 13:36:54.941221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.553 ms 00:23:59.073 [2024-11-18 13:36:54.941233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:59.073 [2024-11-18 13:36:54.941258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.073 [2024-11-18 13:36:54.941266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:59.073 [2024-11-18 13:36:54.941272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:59.073 [2024-11-18 13:36:54.941278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.073 [2024-11-18 13:36:54.941293] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:59.073 [2024-11-18 13:36:54.941312] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:59.073 [2024-11-18 13:36:54.941340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:59.073 [2024-11-18 13:36:54.941358] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:59.073 [2024-11-18 13:36:54.941442] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:59.073 [2024-11-18 13:36:54.941451] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:59.073 [2024-11-18 13:36:54.941462] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:59.073 [2024-11-18 13:36:54.941471] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:59.073 [2024-11-18 13:36:54.941478] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:59.073 [2024-11-18 13:36:54.941487] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:59.073 [2024-11-18 13:36:54.941495] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:59.073 [2024-11-18 13:36:54.941503] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:59.073 [2024-11-18 13:36:54.941510] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:59.073 [2024-11-18 13:36:54.941518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.073 [2024-11-18 13:36:54.941527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:59.073 [2024-11-18 13:36:54.941534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:23:59.073 [2024-11-18 13:36:54.941540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.073 [2024-11-18 13:36:54.941603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.073 [2024-11-18 13:36:54.941612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:59.073 [2024-11-18 13:36:54.941618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:59.073 [2024-11-18 13:36:54.941625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.073 [2024-11-18 13:36:54.941701] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:59.073 [2024-11-18 13:36:54.941723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:59.073 [2024-11-18 13:36:54.941730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.073 [2024-11-18 13:36:54.941736] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.073 [2024-11-18 13:36:54.941743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:59.073 [2024-11-18 13:36:54.941752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:59.073 [2024-11-18 13:36:54.941758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:59.073 [2024-11-18 13:36:54.941763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:59.073 [2024-11-18 13:36:54.941769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:59.073 [2024-11-18 13:36:54.941774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.073 [2024-11-18 13:36:54.941779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:59.073 [2024-11-18 13:36:54.941786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:59.073 [2024-11-18 13:36:54.941792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.074 [2024-11-18 13:36:54.941797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:59.074 [2024-11-18 13:36:54.941802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:59.074 [2024-11-18 13:36:54.941807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.074 [2024-11-18 13:36:54.941813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:59.074 [2024-11-18 13:36:54.941818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:59.074 [2024-11-18 13:36:54.941823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.074 [2024-11-18 13:36:54.941828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:59.074 [2024-11-18 13:36:54.941834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:59.074 [2024-11-18 13:36:54.941844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.074 [2024-11-18 13:36:54.941851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:59.074 [2024-11-18 13:36:54.941856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:59.074 [2024-11-18 13:36:54.941862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.074 [2024-11-18 13:36:54.941868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:59.074 [2024-11-18 13:36:54.941874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:59.074 [2024-11-18 13:36:54.941880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.074 [2024-11-18 13:36:54.941886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:59.074 [2024-11-18 13:36:54.941893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:59.074 [2024-11-18 13:36:54.941899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.074 [2024-11-18 13:36:54.941905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:59.074 [2024-11-18 13:36:54.941911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:59.074 [2024-11-18 13:36:54.941916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.074 [2024-11-18 13:36:54.941922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:59.074 
[2024-11-18 13:36:54.941928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:59.074 [2024-11-18 13:36:54.941933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.074 [2024-11-18 13:36:54.941942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:59.074 [2024-11-18 13:36:54.941948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:59.074 [2024-11-18 13:36:54.941954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.074 [2024-11-18 13:36:54.941960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:59.074 [2024-11-18 13:36:54.941965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:59.074 [2024-11-18 13:36:54.941971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.074 [2024-11-18 13:36:54.941978] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:59.074 [2024-11-18 13:36:54.941986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:59.074 [2024-11-18 13:36:54.941992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.074 [2024-11-18 13:36:54.941999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.074 [2024-11-18 13:36:54.942005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:59.074 [2024-11-18 13:36:54.942011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:59.074 [2024-11-18 13:36:54.942018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:59.074 [2024-11-18 13:36:54.942024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:59.074 [2024-11-18 13:36:54.942030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:59.074 [2024-11-18 13:36:54.942035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:59.074 [2024-11-18 13:36:54.942044] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:59.074 [2024-11-18 13:36:54.942052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.074 [2024-11-18 13:36:54.942059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:59.074 [2024-11-18 13:36:54.942067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:59.074 [2024-11-18 13:36:54.942073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:59.074 [2024-11-18 13:36:54.942080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:59.074 [2024-11-18 13:36:54.942086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:59.074 [2024-11-18 13:36:54.942091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:59.074 [2024-11-18 13:36:54.942097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:23:59.074 [2024-11-18 13:36:54.942104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:59.074 [2024-11-18 13:36:54.942109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:59.074 [2024-11-18 13:36:54.942116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:59.074 [2024-11-18 13:36:54.942122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:59.074 [2024-11-18 13:36:54.942127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:59.074 [2024-11-18 13:36:54.942134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:59.074 [2024-11-18 13:36:54.942141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:59.074 [2024-11-18 13:36:54.942149] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:59.074 [2024-11-18 13:36:54.942156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.074 [2024-11-18 13:36:54.942175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:59.074 [2024-11-18 13:36:54.942182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:59.074 [2024-11-18 13:36:54.942189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:59.074 [2024-11-18 13:36:54.942195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:59.074 [2024-11-18 13:36:54.942203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.074 [2024-11-18 13:36:54.942211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:59.074 [2024-11-18 13:36:54.942219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:23:59.074 [2024-11-18 13:36:54.942225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.074 [2024-11-18 13:36:54.953412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.074 [2024-11-18 13:36:54.953446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.074 [2024-11-18 13:36:54.953455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.154 ms 00:23:59.074 [2024-11-18 13:36:54.953462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.074 [2024-11-18 13:36:54.953532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.074 [2024-11-18 13:36:54.953541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.074 [2024-11-18 13:36:54.953547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:59.074 [2024-11-18 
13:36:54.953554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.074 [2024-11-18 13:36:54.970608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.074 [2024-11-18 13:36:54.970642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.074 [2024-11-18 13:36:54.970655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.013 ms 00:23:59.074 [2024-11-18 13:36:54.970662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.074 [2024-11-18 13:36:54.970698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.074 [2024-11-18 13:36:54.970706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.074 [2024-11-18 13:36:54.970713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:59.074 [2024-11-18 13:36:54.970718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.074 [2024-11-18 13:36:54.971137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.074 [2024-11-18 13:36:54.971194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.074 [2024-11-18 13:36:54.971203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:23:59.074 [2024-11-18 13:36:54.971209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.074 [2024-11-18 13:36:54.971324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.074 [2024-11-18 13:36:54.971335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.074 [2024-11-18 13:36:54.971342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:59.074 [2024-11-18 13:36:54.971349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.074 [2024-11-18 13:36:54.978756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.074 [2024-11-18 13:36:54.978799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.074 [2024-11-18 13:36:54.978812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.385 ms 00:23:59.074 [2024-11-18 13:36:54.978828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.074 [2024-11-18 13:36:54.982341] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:59.074 [2024-11-18 13:36:54.982382] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:59.074 [2024-11-18 13:36:54.982397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.074 [2024-11-18 13:36:54.982408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:59.074 [2024-11-18 13:36:54.982419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.468 ms 00:23:59.075 [2024-11-18 13:36:54.982429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:54.994221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:54.994248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:59.075 [2024-11-18 13:36:54.994257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.744 ms 00:23:59.075 [2024-11-18 13:36:54.994268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:23:59.075 [2024-11-18 13:36:54.997089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:54.997198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:59.075 [2024-11-18 13:36:54.997228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:23:59.075 [2024-11-18 13:36:54.997249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.000625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.000693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:59.075 [2024-11-18 13:36:55.000717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms 00:23:59.075 [2024-11-18 13:36:55.000736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.001606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.001668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.075 [2024-11-18 13:36:55.001692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:23:59.075 [2024-11-18 13:36:55.001711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.023690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.023731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:59.075 [2024-11-18 13:36:55.023742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.943 ms 00:23:59.075 [2024-11-18 13:36:55.023750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.031757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:59.075 [2024-11-18 13:36:55.034853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.034882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.075 [2024-11-18 13:36:55.034894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.065 ms 00:23:59.075 [2024-11-18 13:36:55.034902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.034994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.035005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:59.075 [2024-11-18 13:36:55.035018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:59.075 [2024-11-18 13:36:55.035028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.035095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.035106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.075 [2024-11-18 13:36:55.035114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:59.075 [2024-11-18 13:36:55.035125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.035145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.035178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.075 
[2024-11-18 13:36:55.035187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:59.075 [2024-11-18 13:36:55.035195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.035233] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:59.075 [2024-11-18 13:36:55.035243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.035251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:59.075 [2024-11-18 13:36:55.035258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:59.075 [2024-11-18 13:36:55.035266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.039580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.039616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.075 [2024-11-18 13:36:55.039626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:23:59.075 [2024-11-18 13:36:55.039639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.039716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.075 [2024-11-18 13:36:55.039726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.075 [2024-11-18 13:36:55.039734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:59.075 [2024-11-18 13:36:55.039742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.075 [2024-11-18 13:36:55.040755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.854 ms, result 0 00:24:00.020  [2024-11-18T13:36:57.093Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-18T13:36:58.479Z] Copying: 25/1024 [MB] (10 MBps) [2024-11-18T13:36:59.421Z] Copying: 36/1024 [MB] (11 MBps) [2024-11-18T13:37:00.364Z] Copying: 47/1024 [MB] (11 MBps) [2024-11-18T13:37:01.305Z] Copying: 58/1024 [MB] (11 MBps) [2024-11-18T13:37:02.247Z] Copying: 69/1024 [MB] (10 MBps) [2024-11-18T13:37:03.188Z] Copying: 81/1024 [MB] (11 MBps) [2024-11-18T13:37:04.131Z] Copying: 92/1024 [MB] (11 MBps) [2024-11-18T13:37:05.075Z] Copying: 103/1024 [MB] (11 MBps) [2024-11-18T13:37:06.462Z] Copying: 115/1024 [MB] (11 MBps) [2024-11-18T13:37:07.407Z] Copying: 126/1024 [MB] (11 MBps) [2024-11-18T13:37:08.351Z] Copying: 137/1024 [MB] (11 MBps) [2024-11-18T13:37:09.298Z] Copying: 148/1024 [MB] (11 MBps) [2024-11-18T13:37:10.247Z] Copying: 159/1024 [MB] (10 MBps) [2024-11-18T13:37:11.192Z] Copying: 170/1024 [MB] (11 MBps) [2024-11-18T13:37:12.137Z] Copying: 183/1024 [MB] (13 MBps) [2024-11-18T13:37:13.079Z] Copying: 195/1024 [MB] (11 MBps) [2024-11-18T13:37:14.467Z] Copying: 206/1024 [MB] (11 MBps) [2024-11-18T13:37:15.412Z] Copying: 217/1024 [MB] (10 MBps) [2024-11-18T13:37:16.356Z] Copying: 232456/1048576 [kB] (9976 kBps) [2024-11-18T13:37:17.301Z] Copying: 242464/1048576 [kB] (10008 kBps) [2024-11-18T13:37:18.246Z] Copying: 246/1024 [MB] (10 MBps) [2024-11-18T13:37:19.191Z] Copying: 258/1024 [MB] (11 MBps) [2024-11-18T13:37:20.136Z] Copying: 268/1024 [MB] (10 MBps) [2024-11-18T13:37:21.140Z] Copying: 278/1024 [MB] (10 MBps) [2024-11-18T13:37:22.085Z] Copying: 289/1024 [MB] (10 MBps) [2024-11-18T13:37:23.470Z] Copying: 301/1024 [MB] (11 MBps) [2024-11-18T13:37:24.420Z] Copying: 
311/1024 [MB] (10 MBps) [2024-11-18T13:37:25.365Z] Copying: 321/1024 [MB] (10 MBps) [2024-11-18T13:37:26.308Z] Copying: 331/1024 [MB] (10 MBps) [2024-11-18T13:37:27.253Z] Copying: 341/1024 [MB] (10 MBps) [2024-11-18T13:37:28.197Z] Copying: 360/1024 [MB] (19 MBps) [2024-11-18T13:37:29.143Z] Copying: 374/1024 [MB] (13 MBps) [2024-11-18T13:37:30.094Z] Copying: 389/1024 [MB] (15 MBps) [2024-11-18T13:37:31.479Z] Copying: 410/1024 [MB] (20 MBps) [2024-11-18T13:37:32.421Z] Copying: 427/1024 [MB] (17 MBps) [2024-11-18T13:37:33.364Z] Copying: 444/1024 [MB] (17 MBps) [2024-11-18T13:37:34.305Z] Copying: 464/1024 [MB] (20 MBps) [2024-11-18T13:37:35.247Z] Copying: 483/1024 [MB] (18 MBps) [2024-11-18T13:37:36.187Z] Copying: 500/1024 [MB] (17 MBps) [2024-11-18T13:37:37.128Z] Copying: 518/1024 [MB] (17 MBps) [2024-11-18T13:37:38.067Z] Copying: 539/1024 [MB] (21 MBps) [2024-11-18T13:37:39.453Z] Copying: 562/1024 [MB] (23 MBps) [2024-11-18T13:37:40.394Z] Copying: 580/1024 [MB] (17 MBps) [2024-11-18T13:37:41.334Z] Copying: 601/1024 [MB] (21 MBps) [2024-11-18T13:37:42.275Z] Copying: 625/1024 [MB] (24 MBps) [2024-11-18T13:37:43.217Z] Copying: 642/1024 [MB] (16 MBps) [2024-11-18T13:37:44.158Z] Copying: 656/1024 [MB] (14 MBps) [2024-11-18T13:37:45.103Z] Copying: 666/1024 [MB] (10 MBps) [2024-11-18T13:37:46.490Z] Copying: 677/1024 [MB] (11 MBps) [2024-11-18T13:37:47.064Z] Copying: 688/1024 [MB] (10 MBps) [2024-11-18T13:37:48.450Z] Copying: 714752/1048576 [kB] (10224 kBps) [2024-11-18T13:37:49.397Z] Copying: 708/1024 [MB] (10 MBps) [2024-11-18T13:37:50.431Z] Copying: 718/1024 [MB] (10 MBps) [2024-11-18T13:37:51.376Z] Copying: 728/1024 [MB] (10 MBps) [2024-11-18T13:37:52.323Z] Copying: 738/1024 [MB] (10 MBps) [2024-11-18T13:37:53.269Z] Copying: 748/1024 [MB] (10 MBps) [2024-11-18T13:37:54.208Z] Copying: 758/1024 [MB] (10 MBps) [2024-11-18T13:37:55.155Z] Copying: 787/1024 [MB] (28 MBps) [2024-11-18T13:37:56.101Z] Copying: 798/1024 [MB] (11 MBps) [2024-11-18T13:37:57.490Z] Copying: 808/1024 [MB] (10 MBps) [2024-11-18T13:37:58.065Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-18T13:37:59.450Z] Copying: 833/1024 [MB] (13 MBps) [2024-11-18T13:38:00.394Z] Copying: 853/1024 [MB] (20 MBps) [2024-11-18T13:38:01.339Z] Copying: 869/1024 [MB] (16 MBps) [2024-11-18T13:38:02.285Z] Copying: 885/1024 [MB] (15 MBps) [2024-11-18T13:38:03.226Z] Copying: 906/1024 [MB] (21 MBps) [2024-11-18T13:38:04.171Z] Copying: 936/1024 [MB] (29 MBps) [2024-11-18T13:38:05.114Z] Copying: 952/1024 [MB] (16 MBps) [2024-11-18T13:38:06.057Z] Copying: 966/1024 [MB] (13 MBps) [2024-11-18T13:38:07.439Z] Copying: 984/1024 [MB] (18 MBps) [2024-11-18T13:38:08.382Z] Copying: 1004/1024 [MB] (19 MBps) [2024-11-18T13:38:08.645Z] Copying: 1019/1024 [MB] (14 MBps) [2024-11-18T13:38:08.645Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-18 13:38:08.402814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.517 [2024-11-18 13:38:08.402875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:12.517 [2024-11-18 13:38:08.402890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:12.517 [2024-11-18 13:38:08.402899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.517 [2024-11-18 13:38:08.402921] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:12.517 [2024-11-18 13:38:08.403732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.517 [2024-11-18 13:38:08.403770] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:12.517 [2024-11-18 13:38:08.403783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:25:12.517 [2024-11-18 13:38:08.403792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.517 [2024-11-18 13:38:08.406360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.517 [2024-11-18 13:38:08.406413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:12.517 [2024-11-18 13:38:08.406425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:25:12.517 [2024-11-18 13:38:08.406433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.517 [2024-11-18 13:38:08.424619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.517 [2024-11-18 13:38:08.424664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:12.517 [2024-11-18 13:38:08.424675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.168 ms 00:25:12.517 [2024-11-18 13:38:08.424683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.517 [2024-11-18 13:38:08.430813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.517 [2024-11-18 13:38:08.430865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:12.517 [2024-11-18 13:38:08.430877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.099 ms 00:25:12.517 [2024-11-18 13:38:08.430884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.517 [2024-11-18 13:38:08.433920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.517 [2024-11-18 13:38:08.433978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:12.517 [2024-11-18 13:38:08.433988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms 00:25:12.517 [2024-11-18 13:38:08.433995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.517 [2024-11-18 13:38:08.438490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.517 [2024-11-18 13:38:08.438549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:12.517 [2024-11-18 13:38:08.438560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.452 ms 00:25:12.517 [2024-11-18 13:38:08.438568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.517 [2024-11-18 13:38:08.441151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.517 [2024-11-18 13:38:08.441209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:12.518 [2024-11-18 13:38:08.441220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.525 ms 00:25:12.518 [2024-11-18 13:38:08.441228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.518 [2024-11-18 13:38:08.444686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.518 [2024-11-18 13:38:08.444732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:12.518 [2024-11-18 13:38:08.444741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.441 ms 00:25:12.518 [2024-11-18 13:38:08.444748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.518 [2024-11-18 13:38:08.447436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:12.518 [2024-11-18 13:38:08.447481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:12.518 [2024-11-18 13:38:08.447490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:25:12.518 [2024-11-18 13:38:08.447496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.518 [2024-11-18 13:38:08.449027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.518 [2024-11-18 13:38:08.449077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:12.518 [2024-11-18 13:38:08.449087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:25:12.518 [2024-11-18 13:38:08.449094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.518 [2024-11-18 13:38:08.450540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.518 [2024-11-18 13:38:08.450588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:12.518 [2024-11-18 13:38:08.450599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:25:12.518 [2024-11-18 13:38:08.450607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.518 [2024-11-18 13:38:08.450646] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:12.518 [2024-11-18 13:38:08.450662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 768 / 261120 wr_cnt: 1 state: open 00:25:12.518 [2024-11-18 13:38:08.450674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 
13:38:08.450789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:25:12.518 [2024-11-18 13:38:08.450978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.450993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:12.518 [2024-11-18 13:38:08.451251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:12.519 [2024-11-18 13:38:08.451469] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:12.519 [2024-11-18 13:38:08.451478] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ca886ed-2023-41e1-804f-2576e41e52db 00:25:12.519 [2024-11-18 13:38:08.451487] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 768 00:25:12.519 [2024-11-18 13:38:08.451496] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1728 00:25:12.519 [2024-11-18 13:38:08.451504] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 768 00:25:12.519 [2024-11-18 13:38:08.451520] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.2500 00:25:12.519 [2024-11-18 13:38:08.451532] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:12.519 [2024-11-18 13:38:08.451561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:12.519 [2024-11-18 13:38:08.451570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:12.519 [2024-11-18 13:38:08.451577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:12.519 [2024-11-18 13:38:08.451584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:12.519 [2024-11-18 13:38:08.451591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.519 [2024-11-18 13:38:08.451599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:12.519 [2024-11-18 13:38:08.451607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:25:12.519 [2024-11-18 13:38:08.451615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.453915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.519 [2024-11-18 13:38:08.453958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:12.519 [2024-11-18 13:38:08.453969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.283 ms 00:25:12.519 [2024-11-18 
13:38:08.453978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.454088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.519 [2024-11-18 13:38:08.454109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:12.519 [2024-11-18 13:38:08.454120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:12.519 [2024-11-18 13:38:08.454129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.461707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.461754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:12.519 [2024-11-18 13:38:08.461766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.461774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.461835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.461844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:12.519 [2024-11-18 13:38:08.461860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.461868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.461912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.461925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:12.519 [2024-11-18 13:38:08.461933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.461941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.461956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.461969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:12.519 [2024-11-18 13:38:08.461977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.461989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.474997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.475046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:12.519 [2024-11-18 13:38:08.475058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.475066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.485055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.485103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:12.519 [2024-11-18 13:38:08.485116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.485123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.485275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.485286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:12.519 [2024-11-18 13:38:08.485301] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.485310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.485344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.485354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:12.519 [2024-11-18 13:38:08.485362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.485370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.485449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.485459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:12.519 [2024-11-18 13:38:08.485471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.485482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.485510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.485519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:12.519 [2024-11-18 13:38:08.485531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.485539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.485578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.485587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:12.519 [2024-11-18 13:38:08.485595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.485606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.485648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.519 [2024-11-18 13:38:08.485666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:12.519 [2024-11-18 13:38:08.485676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.519 [2024-11-18 13:38:08.485685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.519 [2024-11-18 13:38:08.485818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 82.967 ms, result 0 00:25:12.779 00:25:12.779 00:25:12.779 13:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:14.680 13:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:14.680 [2024-11-18 13:38:10.562626] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:25:14.680 [2024-11-18 13:38:10.562906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90543 ] 00:25:14.680 [2024-11-18 13:38:10.721914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.680 [2024-11-18 13:38:10.750691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.942 [2024-11-18 13:38:10.862052] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:14.942 [2024-11-18 13:38:10.862132] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:14.942 [2024-11-18 13:38:11.022456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.942 [2024-11-18 13:38:11.022519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:14.942 [2024-11-18 13:38:11.022538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:14.942 [2024-11-18 13:38:11.022547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.942 [2024-11-18 13:38:11.022608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.942 [2024-11-18 13:38:11.022619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:14.942 [2024-11-18 13:38:11.022629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:14.943 [2024-11-18 13:38:11.022637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.022662] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:14.943 [2024-11-18 13:38:11.022943] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:14.943 [2024-11-18 13:38:11.022963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.943 [2024-11-18 13:38:11.022973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:14.943 [2024-11-18 13:38:11.022982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:25:14.943 [2024-11-18 13:38:11.022992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.024805] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:14.943 [2024-11-18 13:38:11.028517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.943 [2024-11-18 13:38:11.028571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:14.943 [2024-11-18 13:38:11.028584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.717 ms 00:25:14.943 [2024-11-18 13:38:11.028596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.028678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.943 [2024-11-18 13:38:11.028692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:14.943 [2024-11-18 13:38:11.028701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:14.943 [2024-11-18 13:38:11.028709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.037376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:14.943 [2024-11-18 13:38:11.037420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:14.943 [2024-11-18 13:38:11.037438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.616 ms 00:25:14.943 [2024-11-18 13:38:11.037451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.037560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.943 [2024-11-18 13:38:11.037572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:14.943 [2024-11-18 13:38:11.037581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:14.943 [2024-11-18 13:38:11.037592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.037647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.943 [2024-11-18 13:38:11.037658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:14.943 [2024-11-18 13:38:11.037666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:14.943 [2024-11-18 13:38:11.037674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.037708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:14.943 [2024-11-18 13:38:11.039904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.943 [2024-11-18 13:38:11.039943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:14.943 [2024-11-18 13:38:11.039953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:25:14.943 [2024-11-18 13:38:11.039970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.040005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.943 [2024-11-18 13:38:11.040014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:14.943 [2024-11-18 13:38:11.040023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:14.943 [2024-11-18 13:38:11.040031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.040069] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:14.943 [2024-11-18 13:38:11.040092] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:14.943 [2024-11-18 13:38:11.040129] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:14.943 [2024-11-18 13:38:11.040146] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:14.943 [2024-11-18 13:38:11.040272] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:14.943 [2024-11-18 13:38:11.040293] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:14.943 [2024-11-18 13:38:11.040304] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:14.943 [2024-11-18 13:38:11.040319] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:14.943 [2024-11-18 13:38:11.040328] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:14.943 [2024-11-18 13:38:11.040341] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:14.943 [2024-11-18 13:38:11.040349] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:14.943 [2024-11-18 13:38:11.040357] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:14.943 [2024-11-18 13:38:11.040364] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:14.943 [2024-11-18 13:38:11.040372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.943 [2024-11-18 13:38:11.040380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:14.943 [2024-11-18 13:38:11.040388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:25:14.943 [2024-11-18 13:38:11.040398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.040484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.943 [2024-11-18 13:38:11.040504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:14.943 [2024-11-18 13:38:11.040512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:14.943 [2024-11-18 13:38:11.040520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.943 [2024-11-18 13:38:11.040625] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:14.943 [2024-11-18 13:38:11.040637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:14.943 [2024-11-18 13:38:11.040647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:14.943 [2024-11-18 13:38:11.040656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:14.943 [2024-11-18 13:38:11.040684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:14.943 [2024-11-18 13:38:11.040699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:14.943 [2024-11-18 13:38:11.040709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:14.943 [2024-11-18 13:38:11.040728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:14.943 [2024-11-18 13:38:11.040736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:14.943 [2024-11-18 13:38:11.040743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:14.943 [2024-11-18 13:38:11.040751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:14.943 [2024-11-18 13:38:11.040760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:14.943 [2024-11-18 13:38:11.040767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:14.943 [2024-11-18 13:38:11.040782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:14.943 [2024-11-18 13:38:11.040791] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:14.943 [2024-11-18 13:38:11.040810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:14.943 [2024-11-18 13:38:11.040827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:14.943 [2024-11-18 13:38:11.040835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:14.943 [2024-11-18 13:38:11.040851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:14.943 [2024-11-18 13:38:11.040866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:14.943 [2024-11-18 13:38:11.040882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:14.943 [2024-11-18 13:38:11.040890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:14.943 [2024-11-18 13:38:11.040906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:14.943 [2024-11-18 13:38:11.040914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:14.943 [2024-11-18 13:38:11.040928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:14.943 [2024-11-18 13:38:11.040936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:14.943 [2024-11-18 13:38:11.040943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:14.943 [2024-11-18 13:38:11.040950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:14.943 [2024-11-18 13:38:11.040957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:14.943 [2024-11-18 13:38:11.040963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:14.943 [2024-11-18 13:38:11.040977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:14.943 [2024-11-18 13:38:11.040987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:14.943 [2024-11-18 13:38:11.040994] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:14.944 [2024-11-18 13:38:11.041002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:14.944 [2024-11-18 13:38:11.041012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:14.944 [2024-11-18 13:38:11.041020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:14.944 [2024-11-18 13:38:11.041028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:14.944 [2024-11-18 13:38:11.041034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:14.944 [2024-11-18 13:38:11.041041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:14.944 
[2024-11-18 13:38:11.041048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:14.944 [2024-11-18 13:38:11.041055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:14.944 [2024-11-18 13:38:11.041063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:14.944 [2024-11-18 13:38:11.041072] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:14.944 [2024-11-18 13:38:11.041081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:14.944 [2024-11-18 13:38:11.041090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:14.944 [2024-11-18 13:38:11.041097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:14.944 [2024-11-18 13:38:11.041104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:14.944 [2024-11-18 13:38:11.041115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:14.944 [2024-11-18 13:38:11.041123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:14.944 [2024-11-18 13:38:11.041130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:14.944 [2024-11-18 13:38:11.041137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:14.944 [2024-11-18 13:38:11.041145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:14.944 [2024-11-18 13:38:11.041152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:14.944 [2024-11-18 13:38:11.041160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:14.944 [2024-11-18 13:38:11.041184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:14.944 [2024-11-18 13:38:11.041191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:14.944 [2024-11-18 13:38:11.041199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:14.944 [2024-11-18 13:38:11.041206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:14.944 [2024-11-18 13:38:11.041215] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:14.944 [2024-11-18 13:38:11.041228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:14.944 [2024-11-18 13:38:11.041243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:14.944 [2024-11-18 13:38:11.041251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:14.944 [2024-11-18 13:38:11.041260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:14.944 [2024-11-18 13:38:11.041270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:14.944 [2024-11-18 13:38:11.041278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.944 [2024-11-18 13:38:11.041286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:14.944 [2024-11-18 13:38:11.041294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:25:14.944 [2024-11-18 13:38:11.041302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.944 [2024-11-18 13:38:11.055667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.944 [2024-11-18 13:38:11.055719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:14.944 [2024-11-18 13:38:11.055731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.311 ms 00:25:14.944 [2024-11-18 13:38:11.055739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.944 [2024-11-18 13:38:11.055829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.944 [2024-11-18 13:38:11.055838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:14.944 [2024-11-18 13:38:11.055847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:14.944 [2024-11-18 13:38:11.055859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.076871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.076941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:15.206 [2024-11-18 13:38:11.076961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.952 ms 00:25:15.206 [2024-11-18 13:38:11.076986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.077050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.077067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:15.206 [2024-11-18 13:38:11.077087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:15.206 [2024-11-18 13:38:11.077103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.077724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.077780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:15.206 [2024-11-18 13:38:11.077798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:25:15.206 [2024-11-18 13:38:11.077823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.078029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.078044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:15.206 [2024-11-18 13:38:11.078057] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:25:15.206 [2024-11-18 13:38:11.078069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.087241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.087296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:15.206 [2024-11-18 13:38:11.087321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.059 ms 00:25:15.206 [2024-11-18 13:38:11.087333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.091113] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:25:15.206 [2024-11-18 13:38:11.091199] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:15.206 [2024-11-18 13:38:11.091213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.091222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:15.206 [2024-11-18 13:38:11.091233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.723 ms 00:25:15.206 [2024-11-18 13:38:11.091241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.107292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.107344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:15.206 [2024-11-18 13:38:11.107357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.882 ms 00:25:15.206 [2024-11-18 13:38:11.107365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.110379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.110426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:15.206 [2024-11-18 13:38:11.110437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:25:15.206 [2024-11-18 13:38:11.110444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.113258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.113303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:15.206 [2024-11-18 13:38:11.113313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms 00:25:15.206 [2024-11-18 13:38:11.113320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.113678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.113706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:15.206 [2024-11-18 13:38:11.113715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:25:15.206 [2024-11-18 13:38:11.113724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.137202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.137262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:15.206 [2024-11-18 13:38:11.137276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.454 ms 00:25:15.206 [2024-11-18 13:38:11.137285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.145453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:15.206 [2024-11-18 13:38:11.148803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.148861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:15.206 [2024-11-18 13:38:11.148873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.464 ms 00:25:15.206 [2024-11-18 13:38:11.148884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.148969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.148985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:15.206 [2024-11-18 13:38:11.148995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:15.206 [2024-11-18 13:38:11.149008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.149804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.149851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:15.206 [2024-11-18 13:38:11.149871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:25:15.206 [2024-11-18 13:38:11.149879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.149907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.149916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:15.206 [2024-11-18 13:38:11.149924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:15.206 [2024-11-18 13:38:11.149932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.149972] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:15.206 [2024-11-18 13:38:11.149983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.149993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:15.206 [2024-11-18 13:38:11.150002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:15.206 [2024-11-18 13:38:11.150013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.155452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.155501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:15.206 [2024-11-18 13:38:11.155511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.412 ms 00:25:15.206 [2024-11-18 13:38:11.155526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 [2024-11-18 13:38:11.155608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.206 [2024-11-18 13:38:11.155619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:15.206 [2024-11-18 13:38:11.155628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:15.206 [2024-11-18 13:38:11.155640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.206 
[2024-11-18 13:38:11.157373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.398 ms, result 0 00:25:16.594  [2024-11-18T13:38:13.666Z] Copying: 980/1048576 [kB] (980 kBps) [2024-11-18T13:38:14.606Z] Copying: 1964/1048576 [kB] (984 kBps) [2024-11-18T13:38:15.548Z] Copying: 4956/1048576 [kB] (2992 kBps) [2024-11-18T13:38:16.489Z] Copying: 16/1024 [MB] (12 MBps) [2024-11-18T13:38:17.429Z] Copying: 32/1024 [MB] (16 MBps) [2024-11-18T13:38:18.370Z] Copying: 56/1024 [MB] (23 MBps) [2024-11-18T13:38:19.346Z] Copying: 93/1024 [MB] (36 MBps) [2024-11-18T13:38:20.733Z] Copying: 122/1024 [MB] (29 MBps) [2024-11-18T13:38:21.681Z] Copying: 148/1024 [MB] (26 MBps) [2024-11-18T13:38:22.622Z] Copying: 178/1024 [MB] (30 MBps) [2024-11-18T13:38:23.562Z] Copying: 217/1024 [MB] (38 MBps) [2024-11-18T13:38:24.503Z] Copying: 268/1024 [MB] (50 MBps) [2024-11-18T13:38:25.448Z] Copying: 292/1024 [MB] (23 MBps) [2024-11-18T13:38:26.392Z] Copying: 319/1024 [MB] (26 MBps) [2024-11-18T13:38:27.779Z] Copying: 344/1024 [MB] (25 MBps) [2024-11-18T13:38:28.351Z] Copying: 360/1024 [MB] (15 MBps) [2024-11-18T13:38:29.738Z] Copying: 390/1024 [MB] (29 MBps) [2024-11-18T13:38:30.684Z] Copying: 420/1024 [MB] (30 MBps) [2024-11-18T13:38:31.627Z] Copying: 448/1024 [MB] (27 MBps) [2024-11-18T13:38:32.570Z] Copying: 478/1024 [MB] (30 MBps) [2024-11-18T13:38:33.514Z] Copying: 501/1024 [MB] (22 MBps) [2024-11-18T13:38:34.458Z] Copying: 528/1024 [MB] (27 MBps) [2024-11-18T13:38:35.401Z] Copying: 558/1024 [MB] (29 MBps) [2024-11-18T13:38:36.345Z] Copying: 581/1024 [MB] (23 MBps) [2024-11-18T13:38:37.733Z] Copying: 598/1024 [MB] (16 MBps) [2024-11-18T13:38:38.678Z] Copying: 614/1024 [MB] (16 MBps) [2024-11-18T13:38:39.625Z] Copying: 637/1024 [MB] (22 MBps) [2024-11-18T13:38:40.569Z] Copying: 667/1024 [MB] (30 MBps) [2024-11-18T13:38:41.513Z] Copying: 687/1024 [MB] (20 MBps) [2024-11-18T13:38:42.458Z] Copying: 707/1024 [MB] (20 MBps) [2024-11-18T13:38:43.404Z] Copying: 737/1024 [MB] (30 MBps) [2024-11-18T13:38:44.350Z] Copying: 765/1024 [MB] (27 MBps) [2024-11-18T13:38:45.736Z] Copying: 794/1024 [MB] (28 MBps) [2024-11-18T13:38:46.680Z] Copying: 828/1024 [MB] (34 MBps) [2024-11-18T13:38:47.625Z] Copying: 859/1024 [MB] (30 MBps) [2024-11-18T13:38:48.631Z] Copying: 880/1024 [MB] (21 MBps) [2024-11-18T13:38:49.574Z] Copying: 906/1024 [MB] (25 MBps) [2024-11-18T13:38:50.521Z] Copying: 928/1024 [MB] (22 MBps) [2024-11-18T13:38:51.465Z] Copying: 947/1024 [MB] (18 MBps) [2024-11-18T13:38:52.408Z] Copying: 971/1024 [MB] (24 MBps) [2024-11-18T13:38:53.354Z] Copying: 990/1024 [MB] (18 MBps) [2024-11-18T13:38:53.928Z] Copying: 1013/1024 [MB] (23 MBps) [2024-11-18T13:38:54.191Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-18 13:38:54.105475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.105570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:58.063 [2024-11-18 13:38:54.105592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:58.063 [2024-11-18 13:38:54.105607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.105643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:58.063 [2024-11-18 13:38:54.107111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.107504] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:58.063 [2024-11-18 13:38:54.107608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:25:58.063 [2024-11-18 13:38:54.107649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.108050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.108085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:58.063 [2024-11-18 13:38:54.108097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:25:58.063 [2024-11-18 13:38:54.108106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.126672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.126722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:58.063 [2024-11-18 13:38:54.126735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.546 ms 00:25:58.063 [2024-11-18 13:38:54.126768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.133178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.133223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:58.063 [2024-11-18 13:38:54.133236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.377 ms 00:25:58.063 [2024-11-18 13:38:54.133244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.136200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.136248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:58.063 [2024-11-18 13:38:54.136259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:25:58.063 [2024-11-18 13:38:54.136267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.140653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.140705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:58.063 [2024-11-18 13:38:54.140733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.339 ms 00:25:58.063 [2024-11-18 13:38:54.140741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.142966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.143010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:58.063 [2024-11-18 13:38:54.143022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:25:58.063 [2024-11-18 13:38:54.143030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.145706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.145770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:58.063 [2024-11-18 13:38:54.145781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:25:58.063 [2024-11-18 13:38:54.145788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.147961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 
[2024-11-18 13:38:54.148008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:58.063 [2024-11-18 13:38:54.148019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:25:58.063 [2024-11-18 13:38:54.148026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.150223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.150271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:58.063 [2024-11-18 13:38:54.150282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.149 ms 00:25:58.063 [2024-11-18 13:38:54.150291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.152696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.063 [2024-11-18 13:38:54.152745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:58.063 [2024-11-18 13:38:54.152755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:25:58.063 [2024-11-18 13:38:54.152762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.063 [2024-11-18 13:38:54.152804] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:58.063 [2024-11-18 13:38:54.152820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:58.063 [2024-11-18 13:38:54.152833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:58.063 [2024-11-18 13:38:54.152842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:58.063 [2024-11-18 13:38:54.152851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152947] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.152998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 
13:38:54.153149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:25:58.064 [2024-11-18 13:38:54.153370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:58.064 [2024-11-18 13:38:54.153575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:58.065 [2024-11-18 13:38:54.153583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:58.065 [2024-11-18 13:38:54.153593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:58.065 [2024-11-18 13:38:54.153600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:58.065 [2024-11-18 13:38:54.153609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:58.065 [2024-11-18 13:38:54.153617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:58.065 [2024-11-18 13:38:54.153624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:58.065 [2024-11-18 13:38:54.153632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:58.065 [2024-11-18 13:38:54.153648] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:58.065 [2024-11-18 13:38:54.153663] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ca886ed-2023-41e1-804f-2576e41e52db 00:25:58.065 [2024-11-18 13:38:54.153678] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:58.065 [2024-11-18 13:38:54.153685] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263872 00:25:58.065 [2024-11-18 13:38:54.153693] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261888 00:25:58.065 [2024-11-18 13:38:54.153702] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:25:58.065 [2024-11-18 13:38:54.153710] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:58.065 [2024-11-18 13:38:54.153719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:58.065 [2024-11-18 13:38:54.153726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:58.065 [2024-11-18 13:38:54.153734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:58.065 [2024-11-18 13:38:54.153741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:58.065 [2024-11-18 13:38:54.153749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.065 [2024-11-18 13:38:54.153763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:58.065 [2024-11-18 13:38:54.153771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:25:58.065 [2024-11-18 13:38:54.153779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.065 [2024-11-18 13:38:54.156322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.065 [2024-11-18 13:38:54.156350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:58.065 [2024-11-18 13:38:54.156362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.524 ms 00:25:58.065 [2024-11-18 
13:38:54.156371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.065 [2024-11-18 13:38:54.156523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.065 [2024-11-18 13:38:54.156538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:58.065 [2024-11-18 13:38:54.156557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:25:58.065 [2024-11-18 13:38:54.156569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.065 [2024-11-18 13:38:54.164523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.065 [2024-11-18 13:38:54.164572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:58.065 [2024-11-18 13:38:54.164583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.065 [2024-11-18 13:38:54.164591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.065 [2024-11-18 13:38:54.164653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.065 [2024-11-18 13:38:54.164662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:58.065 [2024-11-18 13:38:54.164682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.065 [2024-11-18 13:38:54.164690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.065 [2024-11-18 13:38:54.164755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.065 [2024-11-18 13:38:54.164765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:58.065 [2024-11-18 13:38:54.164773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.065 [2024-11-18 13:38:54.164781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.065 [2024-11-18 13:38:54.164796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.065 [2024-11-18 13:38:54.164804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:58.065 [2024-11-18 13:38:54.164812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.065 [2024-11-18 13:38:54.164819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.065 [2024-11-18 13:38:54.178322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.065 [2024-11-18 13:38:54.178365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:58.065 [2024-11-18 13:38:54.178377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.065 [2024-11-18 13:38:54.178385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.327 [2024-11-18 13:38:54.188970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.327 [2024-11-18 13:38:54.189014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:58.327 [2024-11-18 13:38:54.189026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.327 [2024-11-18 13:38:54.189042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.327 [2024-11-18 13:38:54.189088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.327 [2024-11-18 13:38:54.189098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:58.327 [2024-11-18 13:38:54.189107] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.327 [2024-11-18 13:38:54.189114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.327 [2024-11-18 13:38:54.189148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.327 [2024-11-18 13:38:54.189179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:58.327 [2024-11-18 13:38:54.189188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.327 [2024-11-18 13:38:54.189196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.327 [2024-11-18 13:38:54.189272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.327 [2024-11-18 13:38:54.189282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:58.327 [2024-11-18 13:38:54.189291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.327 [2024-11-18 13:38:54.189298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.327 [2024-11-18 13:38:54.189331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.327 [2024-11-18 13:38:54.189340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:58.327 [2024-11-18 13:38:54.189348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.327 [2024-11-18 13:38:54.189356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.327 [2024-11-18 13:38:54.189400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.327 [2024-11-18 13:38:54.189409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:58.327 [2024-11-18 13:38:54.189417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.327 [2024-11-18 13:38:54.189425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.327 [2024-11-18 13:38:54.189472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.327 [2024-11-18 13:38:54.189482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:58.327 [2024-11-18 13:38:54.189490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.327 [2024-11-18 13:38:54.189498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.327 [2024-11-18 13:38:54.189629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.126 ms, result 0 00:25:58.327 00:25:58.327 00:25:58.327 13:38:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:00.872 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:00.872 13:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:00.872 [2024-11-18 13:38:56.438545] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
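The figures reported in the shutdown dump and the spdk_dd invocation above are mutually consistent, and the consistency can be checked directly from the logged numbers. A minimal sanity-check sketch follows; it is not part of the test itself, and the 4 KiB FTL logical block size it uses is an inference (it makes the layout sizes and the 1048576 kB / 1024 MB copy totals line up), not a value the log states explicitly.

    # Reader-side cross-check of figures reported in the FTL log above.
    # All constants are copied from the log; block_size is an assumption.
    MIB = 1024 * 1024

    # "L2P entries: 20971520" x "L2P address size: 4" vs. the 80.00 MiB l2p region
    l2p_entries = 20_971_520
    l2p_addr_size = 4                      # bytes per entry
    assert l2p_entries * l2p_addr_size / MIB == 80.0   # "Region l2p ... blocks: 80.00 MiB"

    # "total writes: 263872" / "user writes: 261888" vs. the reported "WAF: 1.0076"
    total_writes = 263_872
    user_writes = 261_888
    assert round(total_writes / user_writes, 4) == 1.0076

    # spdk_dd --count=262144 --skip=262144: size and offset of the second read,
    # assuming a 4 KiB logical block (consistent with the earlier copy totals)
    block_size = 4096                      # assumed, not stated in the log
    count = skip = 262_144
    print(f"transfer = {count * block_size / MIB:.0f} MiB "
          f"starting at {skip * block_size / MIB:.0f} MiB")
    # -> transfer = 1024 MiB starting at 1024 MiB

Under that block-size assumption, the second spdk_dd pass reads the second 1 GiB of the ftl0 namespace into testfile2, mirroring the 1024 MB copy seen before the dirty shutdown.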
00:26:00.872 [2024-11-18 13:38:56.438663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91012 ] 00:26:00.872 [2024-11-18 13:38:56.597710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.872 [2024-11-18 13:38:56.625964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.872 [2024-11-18 13:38:56.740736] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:00.872 [2024-11-18 13:38:56.740813] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:00.872 [2024-11-18 13:38:56.902551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.872 [2024-11-18 13:38:56.902602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:00.872 [2024-11-18 13:38:56.902617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:00.872 [2024-11-18 13:38:56.902625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.902685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.872 [2024-11-18 13:38:56.902697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:00.872 [2024-11-18 13:38:56.902705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:00.872 [2024-11-18 13:38:56.902714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.902737] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:00.872 [2024-11-18 13:38:56.903396] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:00.872 [2024-11-18 13:38:56.903443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.872 [2024-11-18 13:38:56.903453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:00.872 [2024-11-18 13:38:56.903468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:26:00.872 [2024-11-18 13:38:56.903480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.905275] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:00.872 [2024-11-18 13:38:56.908922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.872 [2024-11-18 13:38:56.908964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:00.872 [2024-11-18 13:38:56.908975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.649 ms 00:26:00.872 [2024-11-18 13:38:56.908988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.909060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.872 [2024-11-18 13:38:56.909071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:00.872 [2024-11-18 13:38:56.909080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:00.872 [2024-11-18 13:38:56.909087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.916996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:00.872 [2024-11-18 13:38:56.917031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:00.872 [2024-11-18 13:38:56.917045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.868 ms 00:26:00.872 [2024-11-18 13:38:56.917053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.917153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.872 [2024-11-18 13:38:56.917162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:00.872 [2024-11-18 13:38:56.917190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:00.872 [2024-11-18 13:38:56.917199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.917267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.872 [2024-11-18 13:38:56.917278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:00.872 [2024-11-18 13:38:56.917290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:00.872 [2024-11-18 13:38:56.917297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.917328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:00.872 [2024-11-18 13:38:56.919314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.872 [2024-11-18 13:38:56.919342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:00.872 [2024-11-18 13:38:56.919352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.996 ms 00:26:00.872 [2024-11-18 13:38:56.919360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.919393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.872 [2024-11-18 13:38:56.919402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:00.872 [2024-11-18 13:38:56.919415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:00.872 [2024-11-18 13:38:56.919423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.872 [2024-11-18 13:38:56.919447] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:00.873 [2024-11-18 13:38:56.919467] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:00.873 [2024-11-18 13:38:56.919502] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:00.873 [2024-11-18 13:38:56.919518] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:00.873 [2024-11-18 13:38:56.919623] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:00.873 [2024-11-18 13:38:56.919635] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:00.873 [2024-11-18 13:38:56.919645] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:00.873 [2024-11-18 13:38:56.919659] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:00.873 [2024-11-18 13:38:56.919669] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:00.873 [2024-11-18 13:38:56.919682] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:00.873 [2024-11-18 13:38:56.919690] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:00.873 [2024-11-18 13:38:56.919698] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:00.873 [2024-11-18 13:38:56.919706] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:00.873 [2024-11-18 13:38:56.919714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.873 [2024-11-18 13:38:56.919723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:00.873 [2024-11-18 13:38:56.919731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:26:00.873 [2024-11-18 13:38:56.919739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.873 [2024-11-18 13:38:56.919822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.873 [2024-11-18 13:38:56.919833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:00.873 [2024-11-18 13:38:56.919841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:00.873 [2024-11-18 13:38:56.919848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.873 [2024-11-18 13:38:56.919951] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:00.873 [2024-11-18 13:38:56.919962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:00.873 [2024-11-18 13:38:56.919972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.873 [2024-11-18 13:38:56.919981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.873 [2024-11-18 13:38:56.919990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:00.873 [2024-11-18 13:38:56.920004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:00.873 [2024-11-18 13:38:56.920022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:00.873 [2024-11-18 13:38:56.920030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.873 [2024-11-18 13:38:56.920048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:00.873 [2024-11-18 13:38:56.920057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:00.873 [2024-11-18 13:38:56.920064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.873 [2024-11-18 13:38:56.920072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:00.873 [2024-11-18 13:38:56.920081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:00.873 [2024-11-18 13:38:56.920088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:00.873 [2024-11-18 13:38:56.920104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:00.873 [2024-11-18 13:38:56.920113] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:00.873 [2024-11-18 13:38:56.920130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.873 [2024-11-18 13:38:56.920147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:00.873 [2024-11-18 13:38:56.920155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.873 [2024-11-18 13:38:56.920192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:00.873 [2024-11-18 13:38:56.920213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.873 [2024-11-18 13:38:56.920231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:00.873 [2024-11-18 13:38:56.920239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.873 [2024-11-18 13:38:56.920254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:00.873 [2024-11-18 13:38:56.920261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.873 [2024-11-18 13:38:56.920277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:00.873 [2024-11-18 13:38:56.920285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:00.873 [2024-11-18 13:38:56.920293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.873 [2024-11-18 13:38:56.920301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:00.873 [2024-11-18 13:38:56.920309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:00.873 [2024-11-18 13:38:56.920316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:00.873 [2024-11-18 13:38:56.920334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:00.873 [2024-11-18 13:38:56.920342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920349] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:00.873 [2024-11-18 13:38:56.920358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:00.873 [2024-11-18 13:38:56.920367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.873 [2024-11-18 13:38:56.920375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.873 [2024-11-18 13:38:56.920383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:00.873 [2024-11-18 13:38:56.920390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:00.873 [2024-11-18 13:38:56.920398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:00.873 
[2024-11-18 13:38:56.920405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:00.873 [2024-11-18 13:38:56.920412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:00.873 [2024-11-18 13:38:56.920420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:00.873 [2024-11-18 13:38:56.920429] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:00.873 [2024-11-18 13:38:56.920439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.873 [2024-11-18 13:38:56.920453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:00.873 [2024-11-18 13:38:56.920461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:00.873 [2024-11-18 13:38:56.920470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:00.873 [2024-11-18 13:38:56.920479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:00.873 [2024-11-18 13:38:56.920486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:00.873 [2024-11-18 13:38:56.920493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:00.873 [2024-11-18 13:38:56.920500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:00.873 [2024-11-18 13:38:56.920508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:00.873 [2024-11-18 13:38:56.920515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:00.873 [2024-11-18 13:38:56.920522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:00.873 [2024-11-18 13:38:56.920529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:00.873 [2024-11-18 13:38:56.920536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:00.873 [2024-11-18 13:38:56.920543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:00.873 [2024-11-18 13:38:56.920551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:00.873 [2024-11-18 13:38:56.920558] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:00.873 [2024-11-18 13:38:56.920567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.873 [2024-11-18 13:38:56.920575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:00.873 [2024-11-18 13:38:56.920583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:00.873 [2024-11-18 13:38:56.920592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:00.873 [2024-11-18 13:38:56.920599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:00.873 [2024-11-18 13:38:56.920607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.920615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:00.874 [2024-11-18 13:38:56.920622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:26:00.874 [2024-11-18 13:38:56.920629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.934269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.934305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:00.874 [2024-11-18 13:38:56.934316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.592 ms 00:26:00.874 [2024-11-18 13:38:56.934325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.934407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.934416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:00.874 [2024-11-18 13:38:56.934424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:00.874 [2024-11-18 13:38:56.934432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.959058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.959120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:00.874 [2024-11-18 13:38:56.959139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.567 ms 00:26:00.874 [2024-11-18 13:38:56.959151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.959251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.959267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:00.874 [2024-11-18 13:38:56.959288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:00.874 [2024-11-18 13:38:56.959300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.959927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.959970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:00.874 [2024-11-18 13:38:56.959988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:26:00.874 [2024-11-18 13:38:56.960001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.960272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.960290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:00.874 [2024-11-18 13:38:56.960304] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:26:00.874 [2024-11-18 13:38:56.960316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.968843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.968887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:00.874 [2024-11-18 13:38:56.968903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.496 ms 00:26:00.874 [2024-11-18 13:38:56.968911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.972546] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:00.874 [2024-11-18 13:38:56.972587] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:00.874 [2024-11-18 13:38:56.972598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.972607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:00.874 [2024-11-18 13:38:56.972616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms 00:26:00.874 [2024-11-18 13:38:56.972623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.988157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.988207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:00.874 [2024-11-18 13:38:56.988218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.483 ms 00:26:00.874 [2024-11-18 13:38:56.988227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.990804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.990841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:00.874 [2024-11-18 13:38:56.990851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.525 ms 00:26:00.874 [2024-11-18 13:38:56.990859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.993420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.993458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:00.874 [2024-11-18 13:38:56.993476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.518 ms 00:26:00.874 [2024-11-18 13:38:56.993484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.874 [2024-11-18 13:38:56.993822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.874 [2024-11-18 13:38:56.993832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:00.874 [2024-11-18 13:38:56.993842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:26:00.874 [2024-11-18 13:38:56.993849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.135 [2024-11-18 13:38:57.016703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.135 [2024-11-18 13:38:57.016759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:01.135 [2024-11-18 13:38:57.016773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.836 ms 00:26:01.135 [2024-11-18 13:38:57.016789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.135 [2024-11-18 13:38:57.025563] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:01.135 [2024-11-18 13:38:57.029037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.135 [2024-11-18 13:38:57.029083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:01.135 [2024-11-18 13:38:57.029094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.193 ms 00:26:01.135 [2024-11-18 13:38:57.029103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.135 [2024-11-18 13:38:57.029201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.135 [2024-11-18 13:38:57.029214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:01.135 [2024-11-18 13:38:57.029223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:01.135 [2024-11-18 13:38:57.029232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.135 [2024-11-18 13:38:57.030048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.135 [2024-11-18 13:38:57.030090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:01.135 [2024-11-18 13:38:57.030103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:26:01.135 [2024-11-18 13:38:57.030112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.135 [2024-11-18 13:38:57.030139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.135 [2024-11-18 13:38:57.030154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:01.135 [2024-11-18 13:38:57.030183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:01.136 [2024-11-18 13:38:57.030194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.136 [2024-11-18 13:38:57.030233] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:01.136 [2024-11-18 13:38:57.030244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.136 [2024-11-18 13:38:57.030255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:01.136 [2024-11-18 13:38:57.030264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:01.136 [2024-11-18 13:38:57.030277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.136 [2024-11-18 13:38:57.035659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.136 [2024-11-18 13:38:57.035701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:01.136 [2024-11-18 13:38:57.035712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.359 ms 00:26:01.136 [2024-11-18 13:38:57.035721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.136 [2024-11-18 13:38:57.035807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.136 [2024-11-18 13:38:57.035817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:01.136 [2024-11-18 13:38:57.035827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:01.136 [2024-11-18 13:38:57.035835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.136 
[2024-11-18 13:38:57.036990] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.971 ms, result 0 00:26:02.521  [2024-11-18T13:38:59.223Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-18T13:39:00.610Z] Copying: 31/1024 [MB] (17 MBps) [2024-11-18T13:39:01.555Z] Copying: 53/1024 [MB] (22 MBps) [2024-11-18T13:39:02.496Z] Copying: 76/1024 [MB] (22 MBps) [2024-11-18T13:39:03.438Z] Copying: 94/1024 [MB] (18 MBps) [2024-11-18T13:39:04.385Z] Copying: 120/1024 [MB] (26 MBps) [2024-11-18T13:39:05.331Z] Copying: 135/1024 [MB] (14 MBps) [2024-11-18T13:39:06.277Z] Copying: 152/1024 [MB] (16 MBps) [2024-11-18T13:39:07.223Z] Copying: 169/1024 [MB] (17 MBps) [2024-11-18T13:39:08.611Z] Copying: 180/1024 [MB] (11 MBps) [2024-11-18T13:39:09.555Z] Copying: 200/1024 [MB] (20 MBps) [2024-11-18T13:39:10.500Z] Copying: 222/1024 [MB] (22 MBps) [2024-11-18T13:39:11.447Z] Copying: 241/1024 [MB] (19 MBps) [2024-11-18T13:39:12.392Z] Copying: 254/1024 [MB] (12 MBps) [2024-11-18T13:39:13.336Z] Copying: 267/1024 [MB] (13 MBps) [2024-11-18T13:39:14.283Z] Copying: 286/1024 [MB] (19 MBps) [2024-11-18T13:39:15.226Z] Copying: 308/1024 [MB] (21 MBps) [2024-11-18T13:39:16.615Z] Copying: 322/1024 [MB] (14 MBps) [2024-11-18T13:39:17.242Z] Copying: 337/1024 [MB] (14 MBps) [2024-11-18T13:39:18.636Z] Copying: 353/1024 [MB] (16 MBps) [2024-11-18T13:39:19.579Z] Copying: 365/1024 [MB] (11 MBps) [2024-11-18T13:39:20.523Z] Copying: 376/1024 [MB] (10 MBps) [2024-11-18T13:39:21.468Z] Copying: 386/1024 [MB] (10 MBps) [2024-11-18T13:39:22.413Z] Copying: 397/1024 [MB] (10 MBps) [2024-11-18T13:39:23.359Z] Copying: 408/1024 [MB] (10 MBps) [2024-11-18T13:39:24.303Z] Copying: 419/1024 [MB] (10 MBps) [2024-11-18T13:39:25.247Z] Copying: 432/1024 [MB] (13 MBps) [2024-11-18T13:39:26.635Z] Copying: 444/1024 [MB] (11 MBps) [2024-11-18T13:39:27.579Z] Copying: 455/1024 [MB] (10 MBps) [2024-11-18T13:39:28.525Z] Copying: 465/1024 [MB] (10 MBps) [2024-11-18T13:39:29.471Z] Copying: 476/1024 [MB] (10 MBps) [2024-11-18T13:39:30.416Z] Copying: 487/1024 [MB] (10 MBps) [2024-11-18T13:39:31.359Z] Copying: 498/1024 [MB] (11 MBps) [2024-11-18T13:39:32.305Z] Copying: 515/1024 [MB] (16 MBps) [2024-11-18T13:39:33.250Z] Copying: 528/1024 [MB] (13 MBps) [2024-11-18T13:39:34.635Z] Copying: 541/1024 [MB] (12 MBps) [2024-11-18T13:39:35.581Z] Copying: 557/1024 [MB] (15 MBps) [2024-11-18T13:39:36.526Z] Copying: 573/1024 [MB] (16 MBps) [2024-11-18T13:39:37.472Z] Copying: 583/1024 [MB] (10 MBps) [2024-11-18T13:39:38.417Z] Copying: 595/1024 [MB] (11 MBps) [2024-11-18T13:39:39.361Z] Copying: 605/1024 [MB] (10 MBps) [2024-11-18T13:39:40.304Z] Copying: 617/1024 [MB] (11 MBps) [2024-11-18T13:39:41.248Z] Copying: 635/1024 [MB] (18 MBps) [2024-11-18T13:39:42.635Z] Copying: 658/1024 [MB] (22 MBps) [2024-11-18T13:39:43.581Z] Copying: 671/1024 [MB] (13 MBps) [2024-11-18T13:39:44.526Z] Copying: 689/1024 [MB] (17 MBps) [2024-11-18T13:39:45.469Z] Copying: 703/1024 [MB] (14 MBps) [2024-11-18T13:39:46.498Z] Copying: 718/1024 [MB] (14 MBps) [2024-11-18T13:39:47.443Z] Copying: 742/1024 [MB] (24 MBps) [2024-11-18T13:39:48.388Z] Copying: 761/1024 [MB] (19 MBps) [2024-11-18T13:39:49.331Z] Copying: 782/1024 [MB] (20 MBps) [2024-11-18T13:39:50.276Z] Copying: 798/1024 [MB] (16 MBps) [2024-11-18T13:39:51.223Z] Copying: 813/1024 [MB] (14 MBps) [2024-11-18T13:39:52.609Z] Copying: 826/1024 [MB] (13 MBps) [2024-11-18T13:39:53.555Z] Copying: 837/1024 [MB] (10 MBps) [2024-11-18T13:39:54.500Z] Copying: 848/1024 [MB] (10 MBps) 
[2024-11-18T13:39:55.444Z] Copying: 873/1024 [MB] (24 MBps) [2024-11-18T13:39:56.389Z] Copying: 884/1024 [MB] (10 MBps) [2024-11-18T13:39:57.333Z] Copying: 900/1024 [MB] (16 MBps) [2024-11-18T13:39:58.278Z] Copying: 914/1024 [MB] (13 MBps) [2024-11-18T13:39:59.220Z] Copying: 929/1024 [MB] (15 MBps) [2024-11-18T13:40:00.602Z] Copying: 950/1024 [MB] (20 MBps) [2024-11-18T13:40:01.545Z] Copying: 968/1024 [MB] (17 MBps) [2024-11-18T13:40:02.488Z] Copying: 983/1024 [MB] (15 MBps) [2024-11-18T13:40:03.431Z] Copying: 994/1024 [MB] (11 MBps) [2024-11-18T13:40:04.002Z] Copying: 1011/1024 [MB] (16 MBps) [2024-11-18T13:40:04.263Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 13:40:04.141865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.141969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:08.135 [2024-11-18 13:40:04.141989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:08.135 [2024-11-18 13:40:04.142011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.142042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:08.135 [2024-11-18 13:40:04.142876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.142919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:08.135 [2024-11-18 13:40:04.142933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:27:08.135 [2024-11-18 13:40:04.142956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.143293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.143311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:08.135 [2024-11-18 13:40:04.143323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:27:08.135 [2024-11-18 13:40:04.143339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.148604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.148641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:08.135 [2024-11-18 13:40:04.148652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.242 ms 00:27:08.135 [2024-11-18 13:40:04.148662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.154955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.154999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:08.135 [2024-11-18 13:40:04.155009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.262 ms 00:27:08.135 [2024-11-18 13:40:04.155016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.157037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.157094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:08.135 [2024-11-18 13:40:04.157104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:27:08.135 [2024-11-18 13:40:04.157111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.161148] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.161215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:08.135 [2024-11-18 13:40:04.161225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:27:08.135 [2024-11-18 13:40:04.161233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.162845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.162886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:08.135 [2024-11-18 13:40:04.162897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:27:08.135 [2024-11-18 13:40:04.162905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.164839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.164887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:08.135 [2024-11-18 13:40:04.164897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:27:08.135 [2024-11-18 13:40:04.164903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.166354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.166396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:08.135 [2024-11-18 13:40:04.166405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:27:08.135 [2024-11-18 13:40:04.166411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.167739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.167786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:08.135 [2024-11-18 13:40:04.167795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:27:08.135 [2024-11-18 13:40:04.167802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.169200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.135 [2024-11-18 13:40:04.169241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:08.135 [2024-11-18 13:40:04.169249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:27:08.135 [2024-11-18 13:40:04.169256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.135 [2024-11-18 13:40:04.169289] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:08.135 [2024-11-18 13:40:04.169303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:08.135 [2024-11-18 13:40:04.169312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:08.135 [2024-11-18 13:40:04.169320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169340] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 
13:40:04.169497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:08.135 [2024-11-18 13:40:04.169509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:27:08.136 [2024-11-18 13:40:04.169657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:08.136 [2024-11-18 13:40:04.169951] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:08.136 [2024-11-18 13:40:04.169958] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ca886ed-2023-41e1-804f-2576e41e52db 00:27:08.136 [2024-11-18 13:40:04.169974] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:08.136 [2024-11-18 13:40:04.169981] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:08.136 [2024-11-18 13:40:04.169987] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:08.136 [2024-11-18 13:40:04.169995] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:08.136 [2024-11-18 13:40:04.170002] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:08.136 [2024-11-18 13:40:04.170009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:08.136 [2024-11-18 13:40:04.170016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:08.136 [2024-11-18 13:40:04.170022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:08.136 [2024-11-18 13:40:04.170028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:08.136 [2024-11-18 13:40:04.170040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.136 [2024-11-18 13:40:04.170053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:08.136 [2024-11-18 13:40:04.170064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:27:08.136 [2024-11-18 13:40:04.170071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.136 [2024-11-18 13:40:04.172127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.136 [2024-11-18 13:40:04.172159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:08.136 [2024-11-18 13:40:04.172184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:27:08.136 [2024-11-18 13:40:04.172191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.136 [2024-11-18 13:40:04.172308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.136 [2024-11-18 13:40:04.172329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:08.136 [2024-11-18 13:40:04.172338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:27:08.136 [2024-11-18 13:40:04.172344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.136 [2024-11-18 13:40:04.178609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.178656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:08.137 [2024-11-18 13:40:04.178665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.178675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.178722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.178730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:08.137 [2024-11-18 13:40:04.178741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.178747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.178799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.178807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:08.137 [2024-11-18 13:40:04.178814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.178821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.178835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.178843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:27:08.137 [2024-11-18 13:40:04.178850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.178856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.190033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.190080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:08.137 [2024-11-18 13:40:04.190090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.190098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.198523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.198564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:08.137 [2024-11-18 13:40:04.198574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.198581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.198619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.198627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:08.137 [2024-11-18 13:40:04.198634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.198640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.198662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.198676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:08.137 [2024-11-18 13:40:04.198683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.198689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.198750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.198759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:08.137 [2024-11-18 13:40:04.198766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.198773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.198797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.198804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:08.137 [2024-11-18 13:40:04.198813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.198820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.198855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.198862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:08.137 [2024-11-18 13:40:04.198869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.198876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.198914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.137 [2024-11-18 13:40:04.198925] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:08.137 [2024-11-18 13:40:04.198933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.137 [2024-11-18 13:40:04.198939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.137 [2024-11-18 13:40:04.199043] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.162 ms, result 0 00:27:08.396 00:27:08.396 00:27:08.396 13:40:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:10.938 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:10.938 Process with pid 89160 is not found 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89160 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 89160 ']' 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 89160 00:27:10.938 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89160) - No such process 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 89160 is not found' 00:27:10.938 13:40:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:11.199 Remove shared memory files 00:27:11.199 13:40:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:11.199 13:40:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:11.199 13:40:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:11.199 13:40:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:11.200 13:40:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:11.200 13:40:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:11.200 13:40:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:11.200 ************************************ 00:27:11.200 END TEST ftl_dirty_shutdown 00:27:11.200 ************************************ 00:27:11.200 00:27:11.200 real 4m4.448s 00:27:11.200 user 4m14.758s 00:27:11.200 sys 0m24.396s 00:27:11.200 13:40:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:11.200 13:40:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:11.200 13:40:07 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:11.200 13:40:07 ftl -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:11.200 13:40:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:11.200 13:40:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:11.200 ************************************ 00:27:11.200 START TEST ftl_upgrade_shutdown 00:27:11.200 ************************************ 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:11.200 * Looking for test storage... 00:27:11.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.200 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.461 13:40:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:11.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.462 --rc genhtml_branch_coverage=1 00:27:11.462 --rc genhtml_function_coverage=1 00:27:11.462 --rc genhtml_legend=1 00:27:11.462 --rc geninfo_all_blocks=1 00:27:11.462 --rc geninfo_unexecuted_blocks=1 00:27:11.462 00:27:11.462 ' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:11.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.462 --rc genhtml_branch_coverage=1 00:27:11.462 --rc genhtml_function_coverage=1 00:27:11.462 --rc genhtml_legend=1 00:27:11.462 --rc geninfo_all_blocks=1 00:27:11.462 --rc geninfo_unexecuted_blocks=1 00:27:11.462 00:27:11.462 ' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:11.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.462 --rc genhtml_branch_coverage=1 00:27:11.462 --rc genhtml_function_coverage=1 00:27:11.462 --rc genhtml_legend=1 00:27:11.462 --rc geninfo_all_blocks=1 00:27:11.462 --rc geninfo_unexecuted_blocks=1 00:27:11.462 00:27:11.462 ' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:11.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.462 --rc genhtml_branch_coverage=1 00:27:11.462 --rc genhtml_function_coverage=1 00:27:11.462 --rc genhtml_legend=1 00:27:11.462 --rc geninfo_all_blocks=1 00:27:11.462 --rc geninfo_unexecuted_blocks=1 00:27:11.462 00:27:11.462 ' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:11.462 13:40:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91801 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91801 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91801 ']' 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:11.462 13:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:11.462 [2024-11-18 13:40:07.443711] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:27:11.462 [2024-11-18 13:40:07.444064] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91801 ] 00:27:11.723 [2024-11-18 13:40:07.605082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.723 [2024-11-18 13:40:07.634224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:12.295 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:12.557 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:12.557 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:12.557 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:12.557 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:12.557 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:12.557 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:12.557 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:12.557 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:12.818 { 00:27:12.818 "name": "basen1", 00:27:12.818 "aliases": [ 00:27:12.818 "13bebe46-acee-4b8b-b1f3-a0bae15c0d55" 00:27:12.818 ], 00:27:12.818 "product_name": "NVMe disk", 00:27:12.818 "block_size": 4096, 00:27:12.818 "num_blocks": 1310720, 00:27:12.818 "uuid": "13bebe46-acee-4b8b-b1f3-a0bae15c0d55", 00:27:12.818 "numa_id": -1, 00:27:12.818 "assigned_rate_limits": { 00:27:12.818 "rw_ios_per_sec": 0, 00:27:12.818 "rw_mbytes_per_sec": 0, 00:27:12.818 "r_mbytes_per_sec": 0, 00:27:12.818 "w_mbytes_per_sec": 0 00:27:12.818 }, 00:27:12.818 "claimed": true, 00:27:12.818 "claim_type": "read_many_write_one", 00:27:12.818 "zoned": false, 00:27:12.818 "supported_io_types": { 00:27:12.818 "read": true, 00:27:12.818 "write": true, 00:27:12.818 "unmap": true, 00:27:12.818 "flush": true, 00:27:12.818 "reset": true, 00:27:12.818 "nvme_admin": true, 00:27:12.818 "nvme_io": true, 00:27:12.818 "nvme_io_md": false, 00:27:12.818 "write_zeroes": true, 00:27:12.818 "zcopy": false, 00:27:12.818 "get_zone_info": false, 00:27:12.818 "zone_management": false, 00:27:12.818 "zone_append": false, 00:27:12.818 "compare": true, 00:27:12.818 "compare_and_write": false, 00:27:12.818 "abort": true, 00:27:12.818 "seek_hole": false, 00:27:12.818 "seek_data": false, 00:27:12.818 "copy": true, 00:27:12.818 "nvme_iov_md": false 00:27:12.818 }, 00:27:12.818 "driver_specific": { 00:27:12.818 "nvme": [ 00:27:12.818 { 00:27:12.818 "pci_address": "0000:00:11.0", 00:27:12.818 "trid": { 00:27:12.818 "trtype": "PCIe", 00:27:12.818 "traddr": "0000:00:11.0" 00:27:12.818 }, 00:27:12.818 "ctrlr_data": { 00:27:12.818 "cntlid": 0, 00:27:12.818 "vendor_id": "0x1b36", 00:27:12.818 "model_number": "QEMU NVMe Ctrl", 00:27:12.818 "serial_number": "12341", 00:27:12.818 "firmware_revision": "8.0.0", 00:27:12.818 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:12.818 "oacs": { 00:27:12.818 "security": 0, 00:27:12.818 "format": 1, 00:27:12.818 "firmware": 0, 00:27:12.818 "ns_manage": 1 00:27:12.818 }, 00:27:12.818 "multi_ctrlr": false, 00:27:12.818 "ana_reporting": false 00:27:12.818 }, 00:27:12.818 "vs": { 00:27:12.818 "nvme_version": "1.4" 00:27:12.818 }, 00:27:12.818 "ns_data": { 00:27:12.818 "id": 1, 00:27:12.818 "can_share": false 00:27:12.818 } 00:27:12.818 } 00:27:12.818 ], 00:27:12.818 "mp_policy": "active_passive" 00:27:12.818 } 00:27:12.818 } 00:27:12.818 ]' 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:12.818 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:13.079 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=fbbacaee-6ae0-43c2-8500-21ed3bb52a94 00:27:13.080 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:13.080 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fbbacaee-6ae0-43c2-8500-21ed3bb52a94 00:27:13.341 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:13.601 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=87bbc1bd-e058-4ebd-8503-a060c53a4adb 00:27:13.601 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 87bbc1bd-e058-4ebd-8503-a060c53a4adb 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=923b5d13-810a-4ef4-81c4-e51c7c2e616a 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 923b5d13-810a-4ef4-81c4-e51c7c2e616a ]] 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 923b5d13-810a-4ef4-81c4-e51c7c2e616a 5120 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=923b5d13-810a-4ef4-81c4-e51c7c2e616a 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 923b5d13-810a-4ef4-81c4-e51c7c2e616a 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=923b5d13-810a-4ef4-81c4-e51c7c2e616a 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:13.863 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 923b5d13-810a-4ef4-81c4-e51c7c2e616a 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:14.123 { 00:27:14.123 "name": "923b5d13-810a-4ef4-81c4-e51c7c2e616a", 00:27:14.123 "aliases": [ 00:27:14.123 "lvs/basen1p0" 00:27:14.123 ], 00:27:14.123 "product_name": "Logical Volume", 00:27:14.123 "block_size": 4096, 00:27:14.123 "num_blocks": 5242880, 00:27:14.123 "uuid": "923b5d13-810a-4ef4-81c4-e51c7c2e616a", 00:27:14.123 "assigned_rate_limits": { 00:27:14.123 "rw_ios_per_sec": 0, 00:27:14.123 "rw_mbytes_per_sec": 0, 00:27:14.123 "r_mbytes_per_sec": 0, 00:27:14.123 "w_mbytes_per_sec": 0 00:27:14.123 }, 00:27:14.123 "claimed": false, 00:27:14.123 "zoned": false, 00:27:14.123 "supported_io_types": { 00:27:14.123 "read": true, 00:27:14.123 "write": true, 00:27:14.123 "unmap": true, 00:27:14.123 "flush": false, 00:27:14.123 "reset": true, 00:27:14.123 "nvme_admin": false, 00:27:14.123 "nvme_io": false, 00:27:14.123 "nvme_io_md": false, 00:27:14.123 "write_zeroes": 
true, 00:27:14.123 "zcopy": false, 00:27:14.123 "get_zone_info": false, 00:27:14.123 "zone_management": false, 00:27:14.123 "zone_append": false, 00:27:14.123 "compare": false, 00:27:14.123 "compare_and_write": false, 00:27:14.123 "abort": false, 00:27:14.123 "seek_hole": true, 00:27:14.123 "seek_data": true, 00:27:14.123 "copy": false, 00:27:14.123 "nvme_iov_md": false 00:27:14.123 }, 00:27:14.123 "driver_specific": { 00:27:14.123 "lvol": { 00:27:14.123 "lvol_store_uuid": "87bbc1bd-e058-4ebd-8503-a060c53a4adb", 00:27:14.123 "base_bdev": "basen1", 00:27:14.123 "thin_provision": true, 00:27:14.123 "num_allocated_clusters": 0, 00:27:14.123 "snapshot": false, 00:27:14.123 "clone": false, 00:27:14.123 "esnap_clone": false 00:27:14.123 } 00:27:14.123 } 00:27:14.123 } 00:27:14.123 ]' 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:14.123 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:14.382 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:14.382 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:14.382 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:14.642 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:14.642 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:14.642 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 923b5d13-810a-4ef4-81c4-e51c7c2e616a -c cachen1p0 --l2p_dram_limit 2 00:27:14.642 [2024-11-18 13:40:10.756378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.642 [2024-11-18 13:40:10.756518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:14.642 [2024-11-18 13:40:10.756535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:14.642 [2024-11-18 13:40:10.756543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.642 [2024-11-18 13:40:10.756589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.642 [2024-11-18 13:40:10.756598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:14.642 [2024-11-18 13:40:10.756607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:14.642 [2024-11-18 13:40:10.756616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.642 [2024-11-18 13:40:10.756631] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:14.642 [2024-11-18 
13:40:10.756837] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:14.642 [2024-11-18 13:40:10.756853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.642 [2024-11-18 13:40:10.756861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:14.642 [2024-11-18 13:40:10.756870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.226 ms 00:27:14.642 [2024-11-18 13:40:10.756878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.642 [2024-11-18 13:40:10.756928] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 61d510a8-38b2-4b29-bdfc-ab6d703271c6 00:27:14.642 [2024-11-18 13:40:10.757889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.642 [2024-11-18 13:40:10.757909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:14.642 [2024-11-18 13:40:10.757921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:14.642 [2024-11-18 13:40:10.757927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.642 [2024-11-18 13:40:10.762544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.642 [2024-11-18 13:40:10.762572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:14.642 [2024-11-18 13:40:10.762582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.581 ms 00:27:14.642 [2024-11-18 13:40:10.762587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.642 [2024-11-18 13:40:10.762620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.642 [2024-11-18 13:40:10.762629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:14.642 [2024-11-18 13:40:10.762637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:14.642 [2024-11-18 13:40:10.762645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.642 [2024-11-18 13:40:10.762679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.642 [2024-11-18 13:40:10.762686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:14.642 [2024-11-18 13:40:10.762693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:14.642 [2024-11-18 13:40:10.762699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.642 [2024-11-18 13:40:10.762716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:14.642 [2024-11-18 13:40:10.764025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.642 [2024-11-18 13:40:10.764050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:14.642 [2024-11-18 13:40:10.764057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.314 ms 00:27:14.642 [2024-11-18 13:40:10.764064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.642 [2024-11-18 13:40:10.764086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.642 [2024-11-18 13:40:10.764094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:14.642 [2024-11-18 13:40:10.764101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:14.642 [2024-11-18 13:40:10.764109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:14.642 [2024-11-18 13:40:10.764121] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:14.642 [2024-11-18 13:40:10.764250] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:14.642 [2024-11-18 13:40:10.764260] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:14.642 [2024-11-18 13:40:10.764270] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:14.642 [2024-11-18 13:40:10.764278] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:14.642 [2024-11-18 13:40:10.764289] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:14.642 [2024-11-18 13:40:10.764295] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:14.642 [2024-11-18 13:40:10.764312] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:14.642 [2024-11-18 13:40:10.764317] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:14.642 [2024-11-18 13:40:10.764324] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:14.643 [2024-11-18 13:40:10.764330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.643 [2024-11-18 13:40:10.764337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:14.643 [2024-11-18 13:40:10.764343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:27:14.643 [2024-11-18 13:40:10.764350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.643 [2024-11-18 13:40:10.764413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.643 [2024-11-18 13:40:10.764422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:14.643 [2024-11-18 13:40:10.764428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:14.643 [2024-11-18 13:40:10.764435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.643 [2024-11-18 13:40:10.764507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:14.643 [2024-11-18 13:40:10.764515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:14.643 [2024-11-18 13:40:10.764522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:14.643 [2024-11-18 13:40:10.764529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:14.643 [2024-11-18 13:40:10.764540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:14.643 [2024-11-18 13:40:10.764552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:14.643 [2024-11-18 13:40:10.764558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:14.643 [2024-11-18 13:40:10.764565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:14.643 [2024-11-18 13:40:10.764580] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:14.643 [2024-11-18 13:40:10.764585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:14.643 [2024-11-18 13:40:10.764598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:14.643 [2024-11-18 13:40:10.764604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:14.643 [2024-11-18 13:40:10.764615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:14.643 [2024-11-18 13:40:10.764620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:14.643 [2024-11-18 13:40:10.764631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:14.643 [2024-11-18 13:40:10.764638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:14.643 [2024-11-18 13:40:10.764643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:14.643 [2024-11-18 13:40:10.764649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:14.643 [2024-11-18 13:40:10.764654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:14.643 [2024-11-18 13:40:10.764660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:14.643 [2024-11-18 13:40:10.764665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:14.643 [2024-11-18 13:40:10.764671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:14.643 [2024-11-18 13:40:10.764677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:14.643 [2024-11-18 13:40:10.764685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:14.643 [2024-11-18 13:40:10.764690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:14.643 [2024-11-18 13:40:10.764697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:14.643 [2024-11-18 13:40:10.764704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:14.643 [2024-11-18 13:40:10.764710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:14.643 [2024-11-18 13:40:10.764723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:14.643 [2024-11-18 13:40:10.764728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:14.643 [2024-11-18 13:40:10.764742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:14.643 [2024-11-18 13:40:10.764761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:14.643 [2024-11-18 13:40:10.764768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764776] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:14.643 [2024-11-18 13:40:10.764782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:14.643 [2024-11-18 13:40:10.764793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:14.643 [2024-11-18 13:40:10.764802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.643 [2024-11-18 13:40:10.764810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:14.643 [2024-11-18 13:40:10.764816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:14.643 [2024-11-18 13:40:10.764823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:14.643 [2024-11-18 13:40:10.764829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:14.643 [2024-11-18 13:40:10.764835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:14.643 [2024-11-18 13:40:10.764841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:14.643 [2024-11-18 13:40:10.764851] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:14.643 [2024-11-18 13:40:10.764861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:14.643 [2024-11-18 13:40:10.764879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:14.643 [2024-11-18 13:40:10.764900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:14.643 [2024-11-18 13:40:10.764906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:14.643 [2024-11-18 13:40:10.764914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:14.643 [2024-11-18 13:40:10.764921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:14.643 [2024-11-18 13:40:10.764968] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:14.643 [2024-11-18 13:40:10.764975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:14.643 [2024-11-18 13:40:10.764990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:14.643 [2024-11-18 13:40:10.764998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:14.643 [2024-11-18 13:40:10.765004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:14.643 [2024-11-18 13:40:10.765014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.643 [2024-11-18 13:40:10.765021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:14.643 [2024-11-18 13:40:10.765030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.557 ms 00:27:14.643 [2024-11-18 13:40:10.765037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.643 [2024-11-18 13:40:10.765067] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:27:14.643 [2024-11-18 13:40:10.765075] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:18.843 [2024-11-18 13:40:14.152850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.843 [2024-11-18 13:40:14.152947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:18.843 [2024-11-18 13:40:14.152968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3387.758 ms 00:27:18.843 [2024-11-18 13:40:14.152977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.843 [2024-11-18 13:40:14.167094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.843 [2024-11-18 13:40:14.167370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:18.843 [2024-11-18 13:40:14.167400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.984 ms 00:27:18.843 [2024-11-18 13:40:14.167409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.843 [2024-11-18 13:40:14.167488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.843 [2024-11-18 13:40:14.167498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:18.843 [2024-11-18 13:40:14.167510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:18.843 [2024-11-18 13:40:14.167518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.843 [2024-11-18 13:40:14.179717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.843 [2024-11-18 13:40:14.179769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:18.843 [2024-11-18 13:40:14.179787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.122 ms 00:27:18.843 [2024-11-18 13:40:14.179799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.843 [2024-11-18 13:40:14.179839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.843 [2024-11-18 13:40:14.179847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:18.843 [2024-11-18 13:40:14.179858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:18.843 [2024-11-18 13:40:14.179867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.843 [2024-11-18 13:40:14.180402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.843 [2024-11-18 13:40:14.180431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:18.843 [2024-11-18 13:40:14.180446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.477 ms 00:27:18.843 [2024-11-18 13:40:14.180456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.843 [2024-11-18 13:40:14.180508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.843 [2024-11-18 13:40:14.180530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:18.843 [2024-11-18 13:40:14.180542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:18.843 [2024-11-18 13:40:14.180551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.843 [2024-11-18 13:40:14.188443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.843 [2024-11-18 13:40:14.188485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:18.843 [2024-11-18 13:40:14.188499] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.866 ms 00:27:18.843 [2024-11-18 13:40:14.188511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.198042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:18.844 [2024-11-18 13:40:14.199278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.199326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:18.844 [2024-11-18 13:40:14.199338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.700 ms 00:27:18.844 [2024-11-18 13:40:14.199348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.224924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.225000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:18.844 [2024-11-18 13:40:14.225021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.543 ms 00:27:18.844 [2024-11-18 13:40:14.225036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.225156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.225205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:18.844 [2024-11-18 13:40:14.225217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:27:18.844 [2024-11-18 13:40:14.225230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.231208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.231405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:18.844 [2024-11-18 13:40:14.231426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.952 ms 00:27:18.844 [2024-11-18 13:40:14.231441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.237365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.237425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:18.844 [2024-11-18 13:40:14.237436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.877 ms 00:27:18.844 [2024-11-18 13:40:14.237446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.237785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.237801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:18.844 [2024-11-18 13:40:14.237811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:27:18.844 [2024-11-18 13:40:14.237831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.278645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.278708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:18.844 [2024-11-18 13:40:14.278721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.790 ms 00:27:18.844 [2024-11-18 13:40:14.278736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.286349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:18.844 [2024-11-18 13:40:14.286408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:18.844 [2024-11-18 13:40:14.286421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.530 ms 00:27:18.844 [2024-11-18 13:40:14.286432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.293195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.293248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:18.844 [2024-11-18 13:40:14.293259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.710 ms 00:27:18.844 [2024-11-18 13:40:14.293292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.300299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.300495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:18.844 [2024-11-18 13:40:14.300513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.954 ms 00:27:18.844 [2024-11-18 13:40:14.300527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.300577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.300590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:18.844 [2024-11-18 13:40:14.300600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:18.844 [2024-11-18 13:40:14.300611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.300685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.844 [2024-11-18 13:40:14.300698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:18.844 [2024-11-18 13:40:14.300707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:18.844 [2024-11-18 13:40:14.300792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.844 [2024-11-18 13:40:14.302385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3545.478 ms, result 0 00:27:18.844 { 00:27:18.844 "name": "ftl", 00:27:18.844 "uuid": "61d510a8-38b2-4b29-bdfc-ab6d703271c6" 00:27:18.844 } 00:27:18.844 13:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:18.844 [2024-11-18 13:40:14.527028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.844 13:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:18.844 13:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:18.844 [2024-11-18 13:40:14.967470] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:19.105 13:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:19.105 [2024-11-18 13:40:15.191939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:19.105 13:40:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:19.682 Fill FTL, iteration 1 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91923 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91923 /var/tmp/spdk.tgt.sock 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91923 ']' 00:27:19.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:19.682 13:40:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.683 13:40:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:19.683 13:40:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.683 13:40:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:19.683 [2024-11-18 13:40:15.634511] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:27:19.683 [2024-11-18 13:40:15.634889] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91923 ] 00:27:19.683 [2024-11-18 13:40:15.798046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.038 [2024-11-18 13:40:15.827988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.607 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.607 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:20.608 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:20.866 ftln1 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91923 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91923 ']' 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91923 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91923 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:20.866 killing process with pid 91923 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91923' 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91923 00:27:20.866 13:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91923 00:27:21.124 13:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:21.124 13:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:21.383 [2024-11-18 13:40:17.298116] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:27:21.383 [2024-11-18 13:40:17.298743] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91959 ] 00:27:21.383 [2024-11-18 13:40:17.461649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.383 [2024-11-18 13:40:17.479284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.769  [2024-11-18T13:40:19.834Z] Copying: 180/1024 [MB] (180 MBps) [2024-11-18T13:40:20.768Z] Copying: 392/1024 [MB] (212 MBps) [2024-11-18T13:40:21.703Z] Copying: 650/1024 [MB] (258 MBps) [2024-11-18T13:40:22.271Z] Copying: 906/1024 [MB] (256 MBps) [2024-11-18T13:40:22.271Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:27:26.143 00:27:26.404 Calculate MD5 checksum, iteration 1 00:27:26.404 13:40:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:26.404 13:40:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:26.404 13:40:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:26.404 13:40:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:26.404 13:40:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:26.404 13:40:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:26.404 13:40:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:26.404 13:40:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:26.404 [2024-11-18 13:40:22.342601] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:27:26.404 [2024-11-18 13:40:22.342744] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92014 ] 00:27:26.404 [2024-11-18 13:40:22.502927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.665 [2024-11-18 13:40:22.530946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.610  [2024-11-18T13:40:25.123Z] Copying: 489/1024 [MB] (489 MBps) [2024-11-18T13:40:25.123Z] Copying: 991/1024 [MB] (502 MBps) [2024-11-18T13:40:25.123Z] Copying: 1024/1024 [MB] (average 498 MBps) 00:27:28.995 00:27:28.995 13:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:28.995 13:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=995ac2e07ce8e33c122647c3d30e6a78 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:31.538 Fill FTL, iteration 2 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:31.538 13:40:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:31.538 [2024-11-18 13:40:27.182073] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:27:31.538 [2024-11-18 13:40:27.182212] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92065 ] 00:27:31.538 [2024-11-18 13:40:27.335688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.538 [2024-11-18 13:40:27.351868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.471  [2024-11-18T13:40:29.532Z] Copying: 258/1024 [MB] (258 MBps) [2024-11-18T13:40:30.906Z] Copying: 522/1024 [MB] (264 MBps) [2024-11-18T13:40:31.470Z] Copying: 784/1024 [MB] (262 MBps) [2024-11-18T13:40:31.731Z] Copying: 1024/1024 [MB] (average 262 MBps) 00:27:35.603 00:27:35.603 Calculate MD5 checksum, iteration 2 00:27:35.603 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:35.603 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:35.603 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:35.603 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:35.603 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:35.603 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:35.603 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:35.603 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:35.603 [2024-11-18 13:40:31.646513] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:27:35.603 [2024-11-18 13:40:31.646816] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92118 ] 00:27:35.864 [2024-11-18 13:40:31.802538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.864 [2024-11-18 13:40:31.832136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.249  [2024-11-18T13:40:33.945Z] Copying: 563/1024 [MB] (563 MBps) [2024-11-18T13:40:40.527Z] Copying: 1024/1024 [MB] (average 594 MBps) 00:27:44.399 00:27:44.399 13:40:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:44.399 13:40:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:45.338 13:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:45.338 13:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f80931dc01970539a8aa4b1de7c7bd31 00:27:45.338 13:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:45.338 13:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:45.338 13:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:45.600 [2024-11-18 13:40:41.642707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.600 [2024-11-18 13:40:41.642762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:45.600 [2024-11-18 13:40:41.642776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:45.600 [2024-11-18 13:40:41.642783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.600 [2024-11-18 13:40:41.642804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.600 [2024-11-18 13:40:41.642811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:45.600 [2024-11-18 13:40:41.642819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:45.600 [2024-11-18 13:40:41.642825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.600 [2024-11-18 13:40:41.642841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.600 [2024-11-18 13:40:41.642852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:45.600 [2024-11-18 13:40:41.642859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:45.600 [2024-11-18 13:40:41.642868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.600 [2024-11-18 13:40:41.642926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.206 ms, result 0 00:27:45.600 true 00:27:45.600 13:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:45.861 { 00:27:45.861 "name": "ftl", 00:27:45.861 "properties": [ 00:27:45.861 { 00:27:45.861 "name": "superblock_version", 00:27:45.861 "value": 5, 00:27:45.861 "read-only": true 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "name": "base_device", 00:27:45.861 "bands": [ 00:27:45.861 { 00:27:45.861 "id": 0, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 
00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 1, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 2, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 3, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 4, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 5, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 6, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 7, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 8, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 9, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 10, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 11, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 12, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 13, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 14, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 15, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 16, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 17, 00:27:45.861 "state": "FREE", 00:27:45.861 "validity": 0.0 00:27:45.861 } 00:27:45.861 ], 00:27:45.861 "read-only": true 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "name": "cache_device", 00:27:45.861 "type": "bdev", 00:27:45.861 "chunks": [ 00:27:45.861 { 00:27:45.861 "id": 0, 00:27:45.861 "state": "INACTIVE", 00:27:45.861 "utilization": 0.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 1, 00:27:45.861 "state": "CLOSED", 00:27:45.861 "utilization": 1.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 2, 00:27:45.861 "state": "CLOSED", 00:27:45.861 "utilization": 1.0 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 3, 00:27:45.861 "state": "OPEN", 00:27:45.861 "utilization": 0.001953125 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "id": 4, 00:27:45.861 "state": "OPEN", 00:27:45.861 "utilization": 0.0 00:27:45.861 } 00:27:45.861 ], 00:27:45.861 "read-only": true 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "name": "verbose_mode", 00:27:45.861 "value": true, 00:27:45.861 "unit": "", 00:27:45.861 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:45.861 }, 00:27:45.861 { 00:27:45.861 "name": "prep_upgrade_on_shutdown", 00:27:45.861 "value": false, 00:27:45.861 "unit": "", 00:27:45.861 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:45.861 } 00:27:45.861 ] 00:27:45.861 } 00:27:45.861 13:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:46.122 [2024-11-18 13:40:42.047329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:46.122 [2024-11-18 13:40:42.047364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:46.122 [2024-11-18 13:40:42.047373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:46.122 [2024-11-18 13:40:42.047380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.122 [2024-11-18 13:40:42.047396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.122 [2024-11-18 13:40:42.047404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:46.122 [2024-11-18 13:40:42.047411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:46.122 [2024-11-18 13:40:42.047416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.122 [2024-11-18 13:40:42.047431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.122 [2024-11-18 13:40:42.047437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:46.122 [2024-11-18 13:40:42.047443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:46.122 [2024-11-18 13:40:42.047449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.122 [2024-11-18 13:40:42.047494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.153 ms, result 0 00:27:46.122 true 00:27:46.122 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:46.122 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:46.122 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:46.383 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:46.383 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:46.383 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:46.383 [2024-11-18 13:40:42.455652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.384 [2024-11-18 13:40:42.455805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:46.384 [2024-11-18 13:40:42.455818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:46.384 [2024-11-18 13:40:42.455825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.384 [2024-11-18 13:40:42.455846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.384 [2024-11-18 13:40:42.455853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:46.384 [2024-11-18 13:40:42.455859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:46.384 [2024-11-18 13:40:42.455865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.384 [2024-11-18 13:40:42.455880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.384 [2024-11-18 13:40:42.455886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:46.384 [2024-11-18 13:40:42.455892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:46.384 [2024-11-18 13:40:42.455897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:46.384 [2024-11-18 13:40:42.455941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.275 ms, result 0 00:27:46.384 true 00:27:46.384 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:46.645 { 00:27:46.645 "name": "ftl", 00:27:46.645 "properties": [ 00:27:46.645 { 00:27:46.645 "name": "superblock_version", 00:27:46.645 "value": 5, 00:27:46.645 "read-only": true 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "name": "base_device", 00:27:46.645 "bands": [ 00:27:46.645 { 00:27:46.645 "id": 0, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 1, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 2, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 3, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 4, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 5, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 6, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 7, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 8, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 9, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 10, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 11, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 12, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 13, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 14, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 15, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 16, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 17, 00:27:46.645 "state": "FREE", 00:27:46.645 "validity": 0.0 00:27:46.645 } 00:27:46.645 ], 00:27:46.645 "read-only": true 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "name": "cache_device", 00:27:46.645 "type": "bdev", 00:27:46.645 "chunks": [ 00:27:46.645 { 00:27:46.645 "id": 0, 00:27:46.645 "state": "INACTIVE", 00:27:46.645 "utilization": 0.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 1, 00:27:46.645 "state": "CLOSED", 00:27:46.645 "utilization": 1.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 2, 00:27:46.645 "state": "CLOSED", 00:27:46.645 "utilization": 1.0 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 3, 00:27:46.645 "state": "OPEN", 00:27:46.645 "utilization": 0.001953125 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "id": 4, 00:27:46.645 "state": "OPEN", 00:27:46.645 "utilization": 0.0 00:27:46.645 } 00:27:46.645 ], 00:27:46.645 "read-only": true 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "name": "verbose_mode", 
00:27:46.645 "value": true, 00:27:46.645 "unit": "", 00:27:46.645 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:46.645 }, 00:27:46.645 { 00:27:46.645 "name": "prep_upgrade_on_shutdown", 00:27:46.645 "value": true, 00:27:46.645 "unit": "", 00:27:46.645 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:46.645 } 00:27:46.645 ] 00:27:46.645 } 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91801 ]] 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91801 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91801 ']' 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91801 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91801 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91801' 00:27:46.645 killing process with pid 91801 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91801 00:27:46.645 13:40:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91801 00:27:46.906 [2024-11-18 13:40:42.820545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:46.906 [2024-11-18 13:40:42.827500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.906 [2024-11-18 13:40:42.827608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:46.906 [2024-11-18 13:40:42.827657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:46.906 [2024-11-18 13:40:42.827677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.906 [2024-11-18 13:40:42.827709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:46.906 [2024-11-18 13:40:42.828262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.906 [2024-11-18 13:40:42.828339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:46.906 [2024-11-18 13:40:42.828357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.523 ms 00:27:46.906 [2024-11-18 13:40:42.828363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.743604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.743650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:56.913 [2024-11-18 13:40:51.743667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8915.188 ms 00:27:56.913 [2024-11-18 13:40:51.743674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.744779] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.744797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:56.913 [2024-11-18 13:40:51.744804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.093 ms 00:27:56.913 [2024-11-18 13:40:51.744810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.745674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.745693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:56.913 [2024-11-18 13:40:51.745701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:27:56.913 [2024-11-18 13:40:51.745711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.747004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.747113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:56.913 [2024-11-18 13:40:51.747126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.266 ms 00:27:56.913 [2024-11-18 13:40:51.747133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.748772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.748802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:56.913 [2024-11-18 13:40:51.748810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.616 ms 00:27:56.913 [2024-11-18 13:40:51.748816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.748870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.748882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:56.913 [2024-11-18 13:40:51.748888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:56.913 [2024-11-18 13:40:51.748894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.749994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.750021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:56.913 [2024-11-18 13:40:51.750028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.082 ms 00:27:56.913 [2024-11-18 13:40:51.750033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.751040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.751130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:56.913 [2024-11-18 13:40:51.751196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.984 ms 00:27:56.913 [2024-11-18 13:40:51.751206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.751928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.751950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:56.913 [2024-11-18 13:40:51.751957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.699 ms 00:27:56.913 [2024-11-18 13:40:51.751963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.752860] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.752888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:56.913 [2024-11-18 13:40:51.752895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:27:56.913 [2024-11-18 13:40:51.752900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.752924] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:56.913 [2024-11-18 13:40:51.752934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:56.913 [2024-11-18 13:40:51.752949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:56.913 [2024-11-18 13:40:51.752955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:56.913 [2024-11-18 13:40:51.752961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.752968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.752974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.752980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.752986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.752992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.752998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.753004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.753010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.753015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.753021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.753027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.753033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.753039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.753045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:56.913 [2024-11-18 13:40:51.753052] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:56.913 [2024-11-18 13:40:51.753058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 61d510a8-38b2-4b29-bdfc-ab6d703271c6 00:27:56.913 [2024-11-18 13:40:51.753064] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:56.913 [2024-11-18 13:40:51.753070] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:56.913 [2024-11-18 13:40:51.753075] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:56.913 [2024-11-18 13:40:51.753081] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:56.913 [2024-11-18 13:40:51.753089] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:56.913 [2024-11-18 13:40:51.753095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:56.913 [2024-11-18 13:40:51.753100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:56.913 [2024-11-18 13:40:51.753105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:56.913 [2024-11-18 13:40:51.753110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:56.913 [2024-11-18 13:40:51.753117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.753123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:56.913 [2024-11-18 13:40:51.753129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:27:56.913 [2024-11-18 13:40:51.753138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.754514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.754596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:56.913 [2024-11-18 13:40:51.754647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.362 ms 00:27:56.913 [2024-11-18 13:40:51.754665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.754739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.913 [2024-11-18 13:40:51.754959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:56.913 [2024-11-18 13:40:51.755144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:27:56.913 [2024-11-18 13:40:51.755327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.762969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.913 [2024-11-18 13:40:51.763196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:56.913 [2024-11-18 13:40:51.763309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.913 [2024-11-18 13:40:51.763418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.763501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.913 [2024-11-18 13:40:51.763581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:56.913 [2024-11-18 13:40:51.763666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.913 [2024-11-18 13:40:51.763712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.763857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.913 [2024-11-18 13:40:51.763936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:56.913 [2024-11-18 13:40:51.764048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.913 [2024-11-18 13:40:51.764136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.764281] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.913 [2024-11-18 13:40:51.764306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:56.913 [2024-11-18 13:40:51.764323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.913 [2024-11-18 13:40:51.764346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.913 [2024-11-18 13:40:51.774364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.914 [2024-11-18 13:40:51.774489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:56.914 [2024-11-18 13:40:51.774542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.914 [2024-11-18 13:40:51.774563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.914 [2024-11-18 13:40:51.781464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.914 [2024-11-18 13:40:51.781584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:56.914 [2024-11-18 13:40:51.781630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.914 [2024-11-18 13:40:51.781651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.914 [2024-11-18 13:40:51.781722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.914 [2024-11-18 13:40:51.781746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:56.914 [2024-11-18 13:40:51.781765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.914 [2024-11-18 13:40:51.781789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.914 [2024-11-18 13:40:51.781830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.914 [2024-11-18 13:40:51.781851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:56.914 [2024-11-18 13:40:51.781871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.914 [2024-11-18 13:40:51.782186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.914 [2024-11-18 13:40:51.782312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.914 [2024-11-18 13:40:51.782379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:56.914 [2024-11-18 13:40:51.782404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.914 [2024-11-18 13:40:51.782451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.914 [2024-11-18 13:40:51.782515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.914 [2024-11-18 13:40:51.782539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:56.914 [2024-11-18 13:40:51.782583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.914 [2024-11-18 13:40:51.782605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.914 [2024-11-18 13:40:51.782658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.914 [2024-11-18 13:40:51.782681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:56.914 [2024-11-18 13:40:51.782700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.914 [2024-11-18 13:40:51.782717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.914 
[2024-11-18 13:40:51.782774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:56.914 [2024-11-18 13:40:51.782798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:56.914 [2024-11-18 13:40:51.782817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:56.914 [2024-11-18 13:40:51.782869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.914 [2024-11-18 13:40:51.783004] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8955.447 ms, result 0 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92357 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92357 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92357 ']' 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.499 13:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:59.499 [2024-11-18 13:40:55.319226] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
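The trace above shows the test's upgrade cycle: prep_upgrade_on_shutdown is enabled over RPC, the running target (pid 91801) is killed so FTL persists its state (the 'FTL shutdown' management process finishing with result 0), and a fresh spdk_tgt is started from the saved tgt.json before the FTL properties are queried again. The following is a minimal bash sketch of that sequence, not the actual upgrade_shutdown.sh/common.sh code; SPDK_DIR, OLD_PID, and the polling loops are assumptions for illustration, while the rpc.py commands and spdk_tgt flags are the ones visible in the trace.

#!/usr/bin/env bash
# Sketch of the prep-upgrade-on-shutdown cycle seen in the trace above.
# Assumptions (not from the log): SPDK_DIR points at the repo root, OLD_PID is
# the pid of the already-running target (91801 in this run), and the target
# uses the default RPC socket /var/tmp/spdk.sock.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"
CONFIG="$SPDK_DIR/test/ftl/config/tgt.json"
OLD_PID=$1

# 1. Ask FTL to prepare for an upgrade during the next shutdown.
"$RPC" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

# 2. Stop the current target; FTL runs its shutdown sequence
#    (persist L2P, NV cache metadata, superblock, ...) before exiting.
kill "$OLD_PID"
while kill -0 "$OLD_PID" 2>/dev/null; do sleep 0.5; done

# 3. Restart the target from the saved config and wait for the RPC socket.
"$SPDK_DIR/build/bin/spdk_tgt" --cpumask='[0]' --config="$CONFIG" &
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# 4. Confirm the FTL properties survived the restart.
"$RPC" bdev_ftl_get_properties -b ftl

The same bdev_ftl_get_properties output also drives the earlier check in the trace, where jq counts cache_device chunks with non-zero utilization to confirm data was actually written to the write buffer before the shutdown.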
00:27:59.499 [2024-11-18 13:40:55.319615] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92357 ] 00:27:59.499 [2024-11-18 13:40:55.479404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.499 [2024-11-18 13:40:55.508150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.760 [2024-11-18 13:40:55.830817] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:59.760 [2024-11-18 13:40:55.830902] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:00.021 [2024-11-18 13:40:55.984125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.021 [2024-11-18 13:40:55.984203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:00.021 [2024-11-18 13:40:55.984222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:00.021 [2024-11-18 13:40:55.984231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:55.984293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:55.984304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:00.022 [2024-11-18 13:40:55.984338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:00.022 [2024-11-18 13:40:55.984346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:55.984373] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:00.022 [2024-11-18 13:40:55.984646] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:00.022 [2024-11-18 13:40:55.984666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:55.984674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:00.022 [2024-11-18 13:40:55.984683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:28:00.022 [2024-11-18 13:40:55.984691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:55.986356] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:00.022 [2024-11-18 13:40:55.990155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:55.990230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:00.022 [2024-11-18 13:40:55.990242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.801 ms 00:28:00.022 [2024-11-18 13:40:55.990250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:55.990326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:55.990336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:00.022 [2024-11-18 13:40:55.990349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:00.022 [2024-11-18 13:40:55.990356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:55.998286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 
13:40:55.998327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:00.022 [2024-11-18 13:40:55.998343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.882 ms 00:28:00.022 [2024-11-18 13:40:55.998351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:55.998406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:55.998418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:00.022 [2024-11-18 13:40:55.998427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:00.022 [2024-11-18 13:40:55.998434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:55.998498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:55.998514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:00.022 [2024-11-18 13:40:55.998522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:00.022 [2024-11-18 13:40:55.998532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:55.998556] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:00.022 [2024-11-18 13:40:56.000752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:56.000790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:00.022 [2024-11-18 13:40:56.000800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.201 ms 00:28:00.022 [2024-11-18 13:40:56.000808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:56.000836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:56.000849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:00.022 [2024-11-18 13:40:56.000858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:00.022 [2024-11-18 13:40:56.000866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:56.000891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:00.022 [2024-11-18 13:40:56.000911] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:00.022 [2024-11-18 13:40:56.000949] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:00.022 [2024-11-18 13:40:56.000965] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:00.022 [2024-11-18 13:40:56.001076] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:00.022 [2024-11-18 13:40:56.001088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:00.022 [2024-11-18 13:40:56.001099] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:00.022 [2024-11-18 13:40:56.001109] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:00.022 [2024-11-18 13:40:56.001119] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:00.022 [2024-11-18 13:40:56.001127] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:00.022 [2024-11-18 13:40:56.001134] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:00.022 [2024-11-18 13:40:56.001143] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:00.022 [2024-11-18 13:40:56.001151] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:00.022 [2024-11-18 13:40:56.001159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:56.001208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:00.022 [2024-11-18 13:40:56.001220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.271 ms 00:28:00.022 [2024-11-18 13:40:56.001228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:56.001315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.022 [2024-11-18 13:40:56.001324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:00.022 [2024-11-18 13:40:56.001331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:28:00.022 [2024-11-18 13:40:56.001342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.022 [2024-11-18 13:40:56.001447] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:00.022 [2024-11-18 13:40:56.001458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:00.022 [2024-11-18 13:40:56.001467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:00.022 [2024-11-18 13:40:56.001479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:00.022 [2024-11-18 13:40:56.001497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:00.022 [2024-11-18 13:40:56.001513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:00.022 [2024-11-18 13:40:56.001522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:00.022 [2024-11-18 13:40:56.001530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:00.022 [2024-11-18 13:40:56.001545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:00.022 [2024-11-18 13:40:56.001553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:00.022 [2024-11-18 13:40:56.001569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:00.022 [2024-11-18 13:40:56.001589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:00.022 [2024-11-18 13:40:56.001605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:00.022 [2024-11-18 13:40:56.001613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001627] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:00.022 [2024-11-18 13:40:56.001635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:00.022 [2024-11-18 13:40:56.001644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:00.022 [2024-11-18 13:40:56.001652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:00.022 [2024-11-18 13:40:56.001659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:00.022 [2024-11-18 13:40:56.001667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:00.022 [2024-11-18 13:40:56.001675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:00.022 [2024-11-18 13:40:56.001683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:00.022 [2024-11-18 13:40:56.001691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:00.022 [2024-11-18 13:40:56.001699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:00.022 [2024-11-18 13:40:56.001706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:00.022 [2024-11-18 13:40:56.001714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:00.022 [2024-11-18 13:40:56.001725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:00.022 [2024-11-18 13:40:56.001733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:00.022 [2024-11-18 13:40:56.001741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:00.022 [2024-11-18 13:40:56.001756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:00.022 [2024-11-18 13:40:56.001764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:00.022 [2024-11-18 13:40:56.001779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:00.022 [2024-11-18 13:40:56.001802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:00.022 [2024-11-18 13:40:56.001810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.022 [2024-11-18 13:40:56.001817] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:00.023 [2024-11-18 13:40:56.001826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:00.023 [2024-11-18 13:40:56.001834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:00.023 [2024-11-18 13:40:56.001844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:00.023 [2024-11-18 13:40:56.001855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:00.023 [2024-11-18 13:40:56.001863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:00.023 [2024-11-18 13:40:56.001870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:00.023 [2024-11-18 13:40:56.001877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:00.023 [2024-11-18 13:40:56.001886] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:00.023 [2024-11-18 13:40:56.001893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:00.023 [2024-11-18 13:40:56.001901] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:00.023 [2024-11-18 13:40:56.001911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.001920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:00.023 [2024-11-18 13:40:56.001927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.001934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.001941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:00.023 [2024-11-18 13:40:56.001948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:00.023 [2024-11-18 13:40:56.001955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:00.023 [2024-11-18 13:40:56.001962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:00.023 [2024-11-18 13:40:56.001969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.001979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.001986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.001993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.002001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.002009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.002016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:00.023 [2024-11-18 13:40:56.002023] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:00.023 [2024-11-18 13:40:56.002031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.002040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:00.023 [2024-11-18 13:40:56.002048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:00.023 [2024-11-18 13:40:56.002055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:00.023 [2024-11-18 13:40:56.002063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:00.023 [2024-11-18 13:40:56.002071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.023 [2024-11-18 13:40:56.002081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:00.023 [2024-11-18 13:40:56.002089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.694 ms 00:28:00.023 [2024-11-18 13:40:56.002096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.023 [2024-11-18 13:40:56.002141] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:00.023 [2024-11-18 13:40:56.002151] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:04.232 [2024-11-18 13:40:59.656937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.657280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:04.232 [2024-11-18 13:40:59.657307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3654.782 ms 00:28:04.232 [2024-11-18 13:40:59.657318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.671034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.671085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:04.232 [2024-11-18 13:40:59.671100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.588 ms 00:28:04.232 [2024-11-18 13:40:59.671110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.671218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.671231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:04.232 [2024-11-18 13:40:59.671241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:28:04.232 [2024-11-18 13:40:59.671250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.684149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.684208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:04.232 [2024-11-18 13:40:59.684226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.833 ms 00:28:04.232 [2024-11-18 13:40:59.684235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.684279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.684288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:04.232 [2024-11-18 13:40:59.684302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:04.232 [2024-11-18 13:40:59.684314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.684905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.684944] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:04.232 [2024-11-18 13:40:59.684957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:28:04.232 [2024-11-18 13:40:59.684966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.685030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.685040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:04.232 [2024-11-18 13:40:59.685049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:04.232 [2024-11-18 13:40:59.685064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.693402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.693438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:04.232 [2024-11-18 13:40:59.693450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.311 ms 00:28:04.232 [2024-11-18 13:40:59.693459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.697304] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:04.232 [2024-11-18 13:40:59.697355] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:04.232 [2024-11-18 13:40:59.697368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.697377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:04.232 [2024-11-18 13:40:59.697387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.817 ms 00:28:04.232 [2024-11-18 13:40:59.697395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.702254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.702308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:04.232 [2024-11-18 13:40:59.702319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.805 ms 00:28:04.232 [2024-11-18 13:40:59.702327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.704932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.704979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:04.232 [2024-11-18 13:40:59.704989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.549 ms 00:28:04.232 [2024-11-18 13:40:59.704998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.707580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.707627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:04.232 [2024-11-18 13:40:59.707637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.535 ms 00:28:04.232 [2024-11-18 13:40:59.707645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.707988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.708003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:04.232 [2024-11-18 
13:40:59.708014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:28:04.232 [2024-11-18 13:40:59.708023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.741825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.741893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:04.232 [2024-11-18 13:40:59.741908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.781 ms 00:28:04.232 [2024-11-18 13:40:59.741917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.750152] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:04.232 [2024-11-18 13:40:59.751087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.751134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:04.232 [2024-11-18 13:40:59.751146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.115 ms 00:28:04.232 [2024-11-18 13:40:59.751154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.751265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.751279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:04.232 [2024-11-18 13:40:59.751290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:04.232 [2024-11-18 13:40:59.751298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.751350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.751363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:04.232 [2024-11-18 13:40:59.751376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:04.232 [2024-11-18 13:40:59.751384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.751407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.751415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:04.232 [2024-11-18 13:40:59.751426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:04.232 [2024-11-18 13:40:59.751435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.751472] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:04.232 [2024-11-18 13:40:59.751490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.751498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:04.232 [2024-11-18 13:40:59.751509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:04.232 [2024-11-18 13:40:59.751520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.756427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.756477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:04.232 [2024-11-18 13:40:59.756488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.885 ms 00:28:04.232 [2024-11-18 13:40:59.756497] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.756591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:40:59.756607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:04.232 [2024-11-18 13:40:59.756620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:28:04.232 [2024-11-18 13:40:59.756629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:40:59.757863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3773.287 ms, result 0 00:28:04.232 [2024-11-18 13:40:59.771646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.232 [2024-11-18 13:40:59.787637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:04.232 [2024-11-18 13:40:59.795761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:04.232 13:40:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.232 13:40:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:04.232 13:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:04.232 13:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:04.232 13:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:04.232 [2024-11-18 13:41:00.027805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.232 [2024-11-18 13:41:00.027862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:04.232 [2024-11-18 13:41:00.027875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:04.232 [2024-11-18 13:41:00.027885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.232 [2024-11-18 13:41:00.027907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.233 [2024-11-18 13:41:00.027917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:04.233 [2024-11-18 13:41:00.027926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:04.233 [2024-11-18 13:41:00.027938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.233 [2024-11-18 13:41:00.027959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.233 [2024-11-18 13:41:00.027968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:04.233 [2024-11-18 13:41:00.027982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:04.233 [2024-11-18 13:41:00.027990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.233 [2024-11-18 13:41:00.028059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.237 ms, result 0 00:28:04.233 true 00:28:04.233 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:04.233 { 00:28:04.233 "name": "ftl", 00:28:04.233 "properties": [ 00:28:04.233 { 00:28:04.233 "name": "superblock_version", 00:28:04.233 "value": 5, 00:28:04.233 "read-only": true 00:28:04.233 }, 00:28:04.233 { 
00:28:04.233 "name": "base_device", 00:28:04.233 "bands": [ 00:28:04.233 { 00:28:04.233 "id": 0, 00:28:04.233 "state": "CLOSED", 00:28:04.233 "validity": 1.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 1, 00:28:04.233 "state": "CLOSED", 00:28:04.233 "validity": 1.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 2, 00:28:04.233 "state": "CLOSED", 00:28:04.233 "validity": 0.007843137254901933 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 3, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 4, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 5, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 6, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 7, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 8, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 9, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 10, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 11, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 12, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 13, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 14, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 15, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 16, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 17, 00:28:04.233 "state": "FREE", 00:28:04.233 "validity": 0.0 00:28:04.233 } 00:28:04.233 ], 00:28:04.233 "read-only": true 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "name": "cache_device", 00:28:04.233 "type": "bdev", 00:28:04.233 "chunks": [ 00:28:04.233 { 00:28:04.233 "id": 0, 00:28:04.233 "state": "INACTIVE", 00:28:04.233 "utilization": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 1, 00:28:04.233 "state": "OPEN", 00:28:04.233 "utilization": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 2, 00:28:04.233 "state": "OPEN", 00:28:04.233 "utilization": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 3, 00:28:04.233 "state": "FREE", 00:28:04.233 "utilization": 0.0 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "id": 4, 00:28:04.233 "state": "FREE", 00:28:04.233 "utilization": 0.0 00:28:04.233 } 00:28:04.233 ], 00:28:04.233 "read-only": true 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "name": "verbose_mode", 00:28:04.233 "value": true, 00:28:04.233 "unit": "", 00:28:04.233 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:04.233 }, 00:28:04.233 { 00:28:04.233 "name": "prep_upgrade_on_shutdown", 00:28:04.233 "value": false, 00:28:04.233 "unit": "", 00:28:04.233 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:04.233 } 00:28:04.233 ] 00:28:04.233 } 00:28:04.233 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:04.233 13:41:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:04.233 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:04.494 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:04.494 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:04.494 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:04.494 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:04.494 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:04.756 Validate MD5 checksum, iteration 1 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:04.756 13:41:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:04.756 [2024-11-18 13:41:00.810803] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:28:04.756 [2024-11-18 13:41:00.810933] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92427 ] 00:28:05.018 [2024-11-18 13:41:00.973867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.018 [2024-11-18 13:41:01.003583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.404  [2024-11-18T13:41:03.105Z] Copying: 650/1024 [MB] (650 MBps) [2024-11-18T13:41:04.049Z] Copying: 1024/1024 [MB] (average 612 MBps) 00:28:07.921 00:28:07.921 13:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:07.921 13:41:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:10.467 Validate MD5 checksum, iteration 2 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=995ac2e07ce8e33c122647c3d30e6a78 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 995ac2e07ce8e33c122647c3d30e6a78 != \9\9\5\a\c\2\e\0\7\c\e\8\e\3\3\c\1\2\2\6\4\7\c\3\d\3\0\e\6\a\7\8 ]] 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:10.467 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:10.468 13:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:10.468 [2024-11-18 13:41:06.163114] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:28:10.468 [2024-11-18 13:41:06.163302] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92488 ] 00:28:10.468 [2024-11-18 13:41:06.323435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.468 [2024-11-18 13:41:06.347497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.851  [2024-11-18T13:41:08.550Z] Copying: 588/1024 [MB] (588 MBps) [2024-11-18T13:41:09.123Z] Copying: 1024/1024 [MB] (average 581 MBps) 00:28:12.995 00:28:12.995 13:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:12.995 13:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f80931dc01970539a8aa4b1de7c7bd31 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f80931dc01970539a8aa4b1de7c7bd31 != \f\8\0\9\3\1\d\c\0\1\9\7\0\5\3\9\a\8\a\a\4\b\1\d\e\7\c\7\b\d\3\1 ]] 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92357 ]] 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92357 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:14.910 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92545 00:28:14.910 13:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.910 13:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:14.910 13:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92545 00:28:14.910 13:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92545 ']' 00:28:14.910 13:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.910 13:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.910 13:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:14.910 13:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.910 13:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.171 [2024-11-18 13:41:11.077913] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:28:15.171 [2024-11-18 13:41:11.078040] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92545 ] 00:28:15.171 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 92357 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:15.171 [2024-11-18 13:41:11.235231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.171 [2024-11-18 13:41:11.274981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.746 [2024-11-18 13:41:11.597481] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:15.746 [2024-11-18 13:41:11.597557] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:15.746 [2024-11-18 13:41:11.749987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.750042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:15.746 [2024-11-18 13:41:11.750061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:15.746 [2024-11-18 13:41:11.750070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.750135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.750146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:15.746 [2024-11-18 13:41:11.750161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:28:15.746 [2024-11-18 13:41:11.750190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.750218] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:15.746 [2024-11-18 13:41:11.750629] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:15.746 [2024-11-18 13:41:11.750676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.750685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:15.746 [2024-11-18 13:41:11.750695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.468 ms 00:28:15.746 [2024-11-18 13:41:11.750703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.751050] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:15.746 [2024-11-18 13:41:11.757270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.757324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:15.746 [2024-11-18 13:41:11.757339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.220 ms 00:28:15.746 [2024-11-18 13:41:11.757348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.758846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:15.746 [2024-11-18 13:41:11.758886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:15.746 [2024-11-18 13:41:11.758897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:15.746 [2024-11-18 13:41:11.758908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.759245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.759259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:15.746 [2024-11-18 13:41:11.759269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:28:15.746 [2024-11-18 13:41:11.759278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.759317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.759326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:15.746 [2024-11-18 13:41:11.759335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:15.746 [2024-11-18 13:41:11.759343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.759395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.759407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:15.746 [2024-11-18 13:41:11.759421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:15.746 [2024-11-18 13:41:11.759428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.759453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:15.746 [2024-11-18 13:41:11.760721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.760776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:15.746 [2024-11-18 13:41:11.760786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.272 ms 00:28:15.746 [2024-11-18 13:41:11.760794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.760829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.760842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:15.746 [2024-11-18 13:41:11.760851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:15.746 [2024-11-18 13:41:11.760858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.760879] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:15.746 [2024-11-18 13:41:11.760898] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:15.746 [2024-11-18 13:41:11.760934] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:15.746 [2024-11-18 13:41:11.760953] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:15.746 [2024-11-18 13:41:11.761060] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:15.746 [2024-11-18 13:41:11.761078] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:15.746 [2024-11-18 13:41:11.761090] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:15.746 [2024-11-18 13:41:11.761101] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:15.746 [2024-11-18 13:41:11.761110] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:15.746 [2024-11-18 13:41:11.761119] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:15.746 [2024-11-18 13:41:11.761127] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:15.746 [2024-11-18 13:41:11.761135] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:15.746 [2024-11-18 13:41:11.761142] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:15.746 [2024-11-18 13:41:11.761154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.761187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:15.746 [2024-11-18 13:41:11.761199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:28:15.746 [2024-11-18 13:41:11.761207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.761292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.746 [2024-11-18 13:41:11.761315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:15.746 [2024-11-18 13:41:11.761324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:28:15.746 [2024-11-18 13:41:11.761331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.746 [2024-11-18 13:41:11.761437] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:15.746 [2024-11-18 13:41:11.761449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:15.746 [2024-11-18 13:41:11.761459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:15.746 [2024-11-18 13:41:11.761470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:15.746 [2024-11-18 13:41:11.761482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:15.746 [2024-11-18 13:41:11.761490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:15.746 [2024-11-18 13:41:11.761502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:15.746 [2024-11-18 13:41:11.761510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:15.746 [2024-11-18 13:41:11.761518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:15.746 [2024-11-18 13:41:11.761525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:15.746 [2024-11-18 13:41:11.761533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:15.746 [2024-11-18 13:41:11.761541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:15.746 [2024-11-18 13:41:11.761548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:15.746 [2024-11-18 13:41:11.761570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:15.746 [2024-11-18 13:41:11.761578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:15.746 [2024-11-18 13:41:11.761586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:15.746 [2024-11-18 13:41:11.761594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:15.746 [2024-11-18 13:41:11.761601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:15.746 [2024-11-18 13:41:11.761609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:15.746 [2024-11-18 13:41:11.761617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:15.746 [2024-11-18 13:41:11.761625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:15.747 [2024-11-18 13:41:11.761633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:15.747 [2024-11-18 13:41:11.761640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:15.747 [2024-11-18 13:41:11.761648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:15.747 [2024-11-18 13:41:11.761654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:15.747 [2024-11-18 13:41:11.761662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:15.747 [2024-11-18 13:41:11.761669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:15.747 [2024-11-18 13:41:11.761677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:15.747 [2024-11-18 13:41:11.761685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:15.747 [2024-11-18 13:41:11.761695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:15.747 [2024-11-18 13:41:11.761702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:15.747 [2024-11-18 13:41:11.761710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:15.747 [2024-11-18 13:41:11.761717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:15.747 [2024-11-18 13:41:11.761724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:15.747 [2024-11-18 13:41:11.761733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:15.747 [2024-11-18 13:41:11.761741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:15.747 [2024-11-18 13:41:11.761748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:15.747 [2024-11-18 13:41:11.761756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:15.747 [2024-11-18 13:41:11.761766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:15.747 [2024-11-18 13:41:11.761774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:15.747 [2024-11-18 13:41:11.761781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:15.747 [2024-11-18 13:41:11.761789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:15.747 [2024-11-18 13:41:11.761797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:15.747 [2024-11-18 13:41:11.761804] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:15.747 [2024-11-18 13:41:11.761812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:15.747 [2024-11-18 13:41:11.761822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:15.747 [2024-11-18 13:41:11.761829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:15.747 [2024-11-18 13:41:11.761841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:15.747 [2024-11-18 13:41:11.761847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:15.747 [2024-11-18 13:41:11.761854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:15.747 [2024-11-18 13:41:11.761860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:15.747 [2024-11-18 13:41:11.761866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:15.747 [2024-11-18 13:41:11.761873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:15.747 [2024-11-18 13:41:11.761882] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:15.747 [2024-11-18 13:41:11.761891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.761899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:15.747 [2024-11-18 13:41:11.761906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.761913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.761919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:15.747 [2024-11-18 13:41:11.761926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:15.747 [2024-11-18 13:41:11.761933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:15.747 [2024-11-18 13:41:11.761943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:15.747 [2024-11-18 13:41:11.761951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.761957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.761965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.761972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.761979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.761986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.761993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:15.747 [2024-11-18 13:41:11.762000] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:15.747 [2024-11-18 13:41:11.762010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.762018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:15.747 [2024-11-18 13:41:11.762026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:15.747 [2024-11-18 13:41:11.762033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:15.747 [2024-11-18 13:41:11.762040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:15.747 [2024-11-18 13:41:11.762048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.747 [2024-11-18 13:41:11.762056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:15.747 [2024-11-18 13:41:11.762068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.680 ms 00:28:15.747 [2024-11-18 13:41:11.762075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.747 [2024-11-18 13:41:11.773638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.747 [2024-11-18 13:41:11.773684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:15.747 [2024-11-18 13:41:11.773702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.512 ms 00:28:15.747 [2024-11-18 13:41:11.773713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.747 [2024-11-18 13:41:11.773758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.747 [2024-11-18 13:41:11.773767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:15.747 [2024-11-18 13:41:11.773776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:15.747 [2024-11-18 13:41:11.773787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.747 [2024-11-18 13:41:11.786933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.747 [2024-11-18 13:41:11.786978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:15.747 [2024-11-18 13:41:11.786989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.080 ms 00:28:15.747 [2024-11-18 13:41:11.786997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.747 [2024-11-18 13:41:11.787038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.747 [2024-11-18 13:41:11.787047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:15.747 [2024-11-18 13:41:11.787060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:15.747 [2024-11-18 13:41:11.787068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.747 [2024-11-18 13:41:11.787226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.747 [2024-11-18 13:41:11.787241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:15.747 [2024-11-18 13:41:11.787251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.096 ms 00:28:15.747 [2024-11-18 13:41:11.787264] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:15.747 [2024-11-18 13:41:11.787309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.747 [2024-11-18 13:41:11.787326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:15.747 [2024-11-18 13:41:11.787336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:15.747 [2024-11-18 13:41:11.787346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.796039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.796079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:15.748 [2024-11-18 13:41:11.796090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.663 ms 00:28:15.748 [2024-11-18 13:41:11.796098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.796221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.796233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:15.748 [2024-11-18 13:41:11.796245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:15.748 [2024-11-18 13:41:11.796256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.812849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.812935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:15.748 [2024-11-18 13:41:11.812962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.570 ms 00:28:15.748 [2024-11-18 13:41:11.812980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.815677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.815736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:15.748 [2024-11-18 13:41:11.815771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:28:15.748 [2024-11-18 13:41:11.815796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.840727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.840784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:15.748 [2024-11-18 13:41:11.840797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.832 ms 00:28:15.748 [2024-11-18 13:41:11.840806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.840952] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:15.748 [2024-11-18 13:41:11.841045] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:15.748 [2024-11-18 13:41:11.841138] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:15.748 [2024-11-18 13:41:11.841243] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:15.748 [2024-11-18 13:41:11.841254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.841262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:15.748 [2024-11-18 
13:41:11.841272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:28:15.748 [2024-11-18 13:41:11.841285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.841347] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:15.748 [2024-11-18 13:41:11.841360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.841369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:15.748 [2024-11-18 13:41:11.841382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:15.748 [2024-11-18 13:41:11.841391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.846138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.846205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:15.748 [2024-11-18 13:41:11.846216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.722 ms 00:28:15.748 [2024-11-18 13:41:11.846229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.847227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.847262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:15.748 [2024-11-18 13:41:11.847272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:15.748 [2024-11-18 13:41:11.847281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.748 [2024-11-18 13:41:11.847348] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:15.748 [2024-11-18 13:41:11.847564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.748 [2024-11-18 13:41:11.847580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:15.748 [2024-11-18 13:41:11.847593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms 00:28:15.748 [2024-11-18 13:41:11.847601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.688 [2024-11-18 13:41:12.561130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.688 [2024-11-18 13:41:12.561241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:16.688 [2024-11-18 13:41:12.561262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 713.122 ms 00:28:16.688 [2024-11-18 13:41:12.561273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.688 [2024-11-18 13:41:12.562945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.688 [2024-11-18 13:41:12.562988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:16.688 [2024-11-18 13:41:12.563009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.148 ms 00:28:16.688 [2024-11-18 13:41:12.563018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.688 [2024-11-18 13:41:12.563769] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:16.688 [2024-11-18 13:41:12.563855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.688 [2024-11-18 13:41:12.563866] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:16.688 [2024-11-18 13:41:12.563877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.716 ms 00:28:16.688 [2024-11-18 13:41:12.563885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.688 [2024-11-18 13:41:12.563936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.688 [2024-11-18 13:41:12.563952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:16.688 [2024-11-18 13:41:12.563961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:16.688 [2024-11-18 13:41:12.563971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.688 [2024-11-18 13:41:12.564013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 716.665 ms, result 0 00:28:16.688 [2024-11-18 13:41:12.564064] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:16.688 [2024-11-18 13:41:12.564198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.688 [2024-11-18 13:41:12.564214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:16.688 [2024-11-18 13:41:12.564223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.135 ms 00:28:16.688 [2024-11-18 13:41:12.564231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.450239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.450305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:17.632 [2024-11-18 13:41:13.450321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 885.410 ms 00:28:17.632 [2024-11-18 13:41:13.450329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.452440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.452485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:17.632 [2024-11-18 13:41:13.452495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.613 ms 00:28:17.632 [2024-11-18 13:41:13.452504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.453492] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:17.632 [2024-11-18 13:41:13.453536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.453545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:17.632 [2024-11-18 13:41:13.453554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.999 ms 00:28:17.632 [2024-11-18 13:41:13.453562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.453599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.453609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:17.632 [2024-11-18 13:41:13.453618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:17.632 [2024-11-18 13:41:13.453626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 
13:41:13.453665] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 889.596 ms, result 0 00:28:17.632 [2024-11-18 13:41:13.453711] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:17.632 [2024-11-18 13:41:13.453723] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:17.632 [2024-11-18 13:41:13.453733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.453742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:17.632 [2024-11-18 13:41:13.453751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1606.409 ms 00:28:17.632 [2024-11-18 13:41:13.453774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.453805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.453815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:17.632 [2024-11-18 13:41:13.453824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:17.632 [2024-11-18 13:41:13.453833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.463052] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:17.632 [2024-11-18 13:41:13.463222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.463236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:17.632 [2024-11-18 13:41:13.463251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.373 ms 00:28:17.632 [2024-11-18 13:41:13.463259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.463979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.464004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:17.632 [2024-11-18 13:41:13.464014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.639 ms 00:28:17.632 [2024-11-18 13:41:13.464022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.466260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.466289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:17.632 [2024-11-18 13:41:13.466299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.220 ms 00:28:17.632 [2024-11-18 13:41:13.466307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.466349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.466359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:17.632 [2024-11-18 13:41:13.466367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:17.632 [2024-11-18 13:41:13.466376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.466488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.466498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:17.632 
[2024-11-18 13:41:13.466509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:17.632 [2024-11-18 13:41:13.466524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.466546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.466554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:17.632 [2024-11-18 13:41:13.466562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:17.632 [2024-11-18 13:41:13.466570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.466610] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:17.632 [2024-11-18 13:41:13.466620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.466628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:17.632 [2024-11-18 13:41:13.466636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:17.632 [2024-11-18 13:41:13.466648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.466705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.632 [2024-11-18 13:41:13.466716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:17.632 [2024-11-18 13:41:13.466724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:17.632 [2024-11-18 13:41:13.466732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.632 [2024-11-18 13:41:13.467877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1717.410 ms, result 0 00:28:17.632 [2024-11-18 13:41:13.483622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.632 [2024-11-18 13:41:13.499626] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:17.633 [2024-11-18 13:41:13.507748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:17.633 Validate MD5 checksum, iteration 1 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:17.633 13:41:13 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:17.633 13:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:17.633 [2024-11-18 13:41:13.728200] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:28:17.633 [2024-11-18 13:41:13.728334] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92580 ] 00:28:17.892 [2024-11-18 13:41:13.887698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.892 [2024-11-18 13:41:13.917922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.285  [2024-11-18T13:41:15.679Z] Copying: 822/1024 [MB] (822 MBps) [2024-11-18T13:41:16.246Z] Copying: 1024/1024 [MB] (average 820 MBps) 00:28:20.118 00:28:20.118 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:20.118 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:22.656 Validate MD5 checksum, iteration 2 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=995ac2e07ce8e33c122647c3d30e6a78 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 995ac2e07ce8e33c122647c3d30e6a78 != \9\9\5\a\c\2\e\0\7\c\e\8\e\3\3\c\1\2\2\6\4\7\c\3\d\3\0\e\6\a\7\8 ]] 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:22.656 13:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:22.656 [2024-11-18 13:41:18.338594] Starting SPDK v25.01-pre git sha1 
d47eb51c9 / DPDK 23.11.0 initialization... 00:28:22.656 [2024-11-18 13:41:18.338923] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92630 ] 00:28:22.656 [2024-11-18 13:41:18.497953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.656 [2024-11-18 13:41:18.524046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.042  [2024-11-18T13:41:20.741Z] Copying: 671/1024 [MB] (671 MBps) [2024-11-18T13:41:24.937Z] Copying: 1024/1024 [MB] (average 620 MBps) 00:28:28.809 00:28:28.809 13:41:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:28.809 13:41:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:30.711 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:30.711 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f80931dc01970539a8aa4b1de7c7bd31 00:28:30.711 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f80931dc01970539a8aa4b1de7c7bd31 != \f\8\0\9\3\1\d\c\0\1\9\7\0\5\3\9\a\8\a\a\4\b\1\d\e\7\c\7\b\d\3\1 ]] 00:28:30.711 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:30.711 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:30.711 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:30.711 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:30.711 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:30.711 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92545 ]] 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92545 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92545 ']' 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 92545 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:30.972 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.973 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92545 00:28:30.973 killing process with pid 92545 00:28:30.973 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.973 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.973 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92545' 00:28:30.973 13:41:26 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 92545 00:28:30.973 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 92545 00:28:30.973 [2024-11-18 13:41:27.019376] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:30.973 [2024-11-18 13:41:27.023472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.023505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:30.973 [2024-11-18 13:41:27.023515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:30.973 [2024-11-18 13:41:27.023522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.023539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:30.973 [2024-11-18 13:41:27.023906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.023925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:30.973 [2024-11-18 13:41:27.023936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:28:30.973 [2024-11-18 13:41:27.023942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.024119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.024130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:30.973 [2024-11-18 13:41:27.024137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:28:30.973 [2024-11-18 13:41:27.024143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.025452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.025477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:30.973 [2024-11-18 13:41:27.025485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.296 ms 00:28:30.973 [2024-11-18 13:41:27.025491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.026382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.026400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:30.973 [2024-11-18 13:41:27.026408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.864 ms 00:28:30.973 [2024-11-18 13:41:27.026419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.028358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.028386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:30.973 [2024-11-18 13:41:27.028393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.914 ms 00:28:30.973 [2024-11-18 13:41:27.028402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.030059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.030090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:30.973 [2024-11-18 13:41:27.030097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.631 ms 00:28:30.973 [2024-11-18 13:41:27.030103] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.030161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.030177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:30.973 [2024-11-18 13:41:27.030183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:30.973 [2024-11-18 13:41:27.030192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.031719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.031746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:30.973 [2024-11-18 13:41:27.031753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.512 ms 00:28:30.973 [2024-11-18 13:41:27.031758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.034221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.034245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:30.973 [2024-11-18 13:41:27.034252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.438 ms 00:28:30.973 [2024-11-18 13:41:27.034257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.036006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.036031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:30.973 [2024-11-18 13:41:27.036038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.725 ms 00:28:30.973 [2024-11-18 13:41:27.036043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.037518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.037544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:30.973 [2024-11-18 13:41:27.037551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.430 ms 00:28:30.973 [2024-11-18 13:41:27.037556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.037580] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:30.973 [2024-11-18 13:41:27.037596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:30.973 [2024-11-18 13:41:27.037604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:30.973 [2024-11-18 13:41:27.037611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:30.973 [2024-11-18 13:41:27.037617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 
[2024-11-18 13:41:27.037646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:30.973 [2024-11-18 13:41:27.037707] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:30.973 [2024-11-18 13:41:27.037713] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 61d510a8-38b2-4b29-bdfc-ab6d703271c6 00:28:30.973 [2024-11-18 13:41:27.037719] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:30.973 [2024-11-18 13:41:27.037724] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:30.973 [2024-11-18 13:41:27.037730] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:30.973 [2024-11-18 13:41:27.037735] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:30.973 [2024-11-18 13:41:27.037741] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:30.973 [2024-11-18 13:41:27.037749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:30.973 [2024-11-18 13:41:27.037754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:30.973 [2024-11-18 13:41:27.037759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:30.973 [2024-11-18 13:41:27.037765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:30.973 [2024-11-18 13:41:27.037770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.037777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:30.973 [2024-11-18 13:41:27.037784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:28:30.973 [2024-11-18 13:41:27.037789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.038960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.038978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:30.973 [2024-11-18 13:41:27.038984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.158 ms 00:28:30.973 [2024-11-18 13:41:27.038990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:28:30.973 [2024-11-18 13:41:27.039057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.973 [2024-11-18 13:41:27.039064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:30.973 [2024-11-18 13:41:27.039071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:28:30.973 [2024-11-18 13:41:27.039076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.043482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.973 [2024-11-18 13:41:27.043509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:30.973 [2024-11-18 13:41:27.043516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.973 [2024-11-18 13:41:27.043522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.973 [2024-11-18 13:41:27.043545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.973 [2024-11-18 13:41:27.043552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:30.973 [2024-11-18 13:41:27.043558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.973 [2024-11-18 13:41:27.043563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.043601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.043609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:30.974 [2024-11-18 13:41:27.043615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.043621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.043639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.043646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:30.974 [2024-11-18 13:41:27.043652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.043658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.051524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.051557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:30.974 [2024-11-18 13:41:27.051565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.051571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.057416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.057446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:30.974 [2024-11-18 13:41:27.057460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.057466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.057515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.057523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:30.974 [2024-11-18 13:41:27.057529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.057535] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.057562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.057571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:30.974 [2024-11-18 13:41:27.057580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.057586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.057636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.057645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:30.974 [2024-11-18 13:41:27.057650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.057656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.057678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.057685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:30.974 [2024-11-18 13:41:27.057694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.057701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.057729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.057736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:30.974 [2024-11-18 13:41:27.057744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.057750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.057781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:30.974 [2024-11-18 13:41:27.057789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:30.974 [2024-11-18 13:41:27.057797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:30.974 [2024-11-18 13:41:27.057803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.974 [2024-11-18 13:41:27.057892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 34.398 ms, result 0 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:31.980 Remove shared memory files 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:31.980 13:41:28 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92357 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:31.980 00:28:31.980 real 1m20.883s 00:28:31.980 user 1m44.674s 00:28:31.980 sys 0m21.044s 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.980 ************************************ 00:28:31.980 END TEST ftl_upgrade_shutdown 00:28:31.980 ************************************ 00:28:31.980 13:41:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:32.254 13:41:28 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:32.254 13:41:28 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:32.254 13:41:28 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:28:32.254 13:41:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.254 13:41:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:32.254 ************************************ 00:28:32.254 START TEST ftl_restore_fast 00:28:32.254 ************************************ 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:32.254 * Looking for test storage... 00:28:32.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:32.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.254 --rc genhtml_branch_coverage=1 00:28:32.254 --rc genhtml_function_coverage=1 00:28:32.254 --rc genhtml_legend=1 00:28:32.254 --rc geninfo_all_blocks=1 00:28:32.254 --rc geninfo_unexecuted_blocks=1 00:28:32.254 00:28:32.254 ' 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:32.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.254 --rc genhtml_branch_coverage=1 00:28:32.254 --rc genhtml_function_coverage=1 00:28:32.254 --rc genhtml_legend=1 00:28:32.254 --rc geninfo_all_blocks=1 00:28:32.254 --rc geninfo_unexecuted_blocks=1 00:28:32.254 00:28:32.254 ' 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:32.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.254 --rc genhtml_branch_coverage=1 00:28:32.254 --rc genhtml_function_coverage=1 00:28:32.254 --rc genhtml_legend=1 00:28:32.254 --rc geninfo_all_blocks=1 00:28:32.254 --rc geninfo_unexecuted_blocks=1 00:28:32.254 00:28:32.254 ' 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:32.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.254 --rc genhtml_branch_coverage=1 00:28:32.254 --rc genhtml_function_coverage=1 00:28:32.254 --rc genhtml_legend=1 00:28:32.254 --rc geninfo_all_blocks=1 00:28:32.254 --rc geninfo_unexecuted_blocks=1 00:28:32.254 00:28:32.254 ' 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:32.254 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Gq45tivz05 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:32.255 13:41:28 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92808 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92808 00:28:32.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 92808 ']' 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.255 13:41:28 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:32.516 [2024-11-18 13:41:28.383803] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:28:32.516 [2024-11-18 13:41:28.384229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92808 ] 00:28:32.516 [2024-11-18 13:41:28.546146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.516 [2024-11-18 13:41:28.575367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:28:33.460 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:33.721 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:33.721 { 00:28:33.721 "name": "nvme0n1", 00:28:33.721 "aliases": [ 00:28:33.721 "a702c1f7-79a1-4368-8eda-54c7e4948a73" 00:28:33.721 ], 00:28:33.721 "product_name": "NVMe disk", 00:28:33.721 "block_size": 4096, 00:28:33.721 "num_blocks": 1310720, 00:28:33.721 "uuid": "a702c1f7-79a1-4368-8eda-54c7e4948a73", 00:28:33.721 "numa_id": -1, 00:28:33.721 "assigned_rate_limits": { 00:28:33.721 "rw_ios_per_sec": 0, 00:28:33.721 "rw_mbytes_per_sec": 0, 00:28:33.721 "r_mbytes_per_sec": 0, 00:28:33.721 "w_mbytes_per_sec": 0 00:28:33.721 }, 00:28:33.721 "claimed": true, 00:28:33.721 "claim_type": "read_many_write_one", 00:28:33.721 "zoned": false, 00:28:33.721 "supported_io_types": { 00:28:33.721 "read": true, 00:28:33.721 "write": true, 00:28:33.721 "unmap": true, 00:28:33.721 "flush": true, 00:28:33.721 "reset": true, 00:28:33.721 "nvme_admin": true, 00:28:33.721 "nvme_io": true, 00:28:33.721 "nvme_io_md": false, 00:28:33.721 "write_zeroes": true, 00:28:33.721 "zcopy": false, 00:28:33.721 "get_zone_info": false, 00:28:33.721 "zone_management": false, 00:28:33.722 "zone_append": false, 00:28:33.722 "compare": true, 00:28:33.722 "compare_and_write": false, 00:28:33.722 "abort": true, 00:28:33.722 "seek_hole": false, 00:28:33.722 "seek_data": false, 00:28:33.722 "copy": true, 00:28:33.722 "nvme_iov_md": false 00:28:33.722 }, 00:28:33.722 "driver_specific": { 00:28:33.722 "nvme": [ 00:28:33.722 { 00:28:33.722 "pci_address": "0000:00:11.0", 00:28:33.722 "trid": { 00:28:33.722 "trtype": "PCIe", 00:28:33.722 "traddr": "0000:00:11.0" 00:28:33.722 }, 00:28:33.722 "ctrlr_data": { 00:28:33.722 "cntlid": 0, 00:28:33.722 "vendor_id": "0x1b36", 00:28:33.722 "model_number": "QEMU NVMe Ctrl", 00:28:33.722 "serial_number": "12341", 00:28:33.722 "firmware_revision": "8.0.0", 00:28:33.722 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:33.722 "oacs": { 00:28:33.722 "security": 0, 00:28:33.722 "format": 1, 00:28:33.722 "firmware": 0, 00:28:33.722 "ns_manage": 1 00:28:33.722 }, 00:28:33.722 "multi_ctrlr": false, 00:28:33.722 "ana_reporting": false 00:28:33.722 }, 00:28:33.722 "vs": { 00:28:33.722 "nvme_version": "1.4" 00:28:33.722 }, 00:28:33.722 "ns_data": { 00:28:33.722 "id": 1, 00:28:33.722 "can_share": false 00:28:33.722 } 00:28:33.722 } 00:28:33.722 ], 00:28:33.722 "mp_policy": "active_passive" 00:28:33.722 } 00:28:33.722 } 00:28:33.722 ]' 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:28:33.722 13:41:29 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:33.983 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=87bbc1bd-e058-4ebd-8503-a060c53a4adb 00:28:33.983 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:33.983 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87bbc1bd-e058-4ebd-8503-a060c53a4adb 00:28:34.243 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:34.502 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=17fb5234-0e2f-4d47-9404-39b53d0a0e44 00:28:34.502 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 17fb5234-0e2f-4d47-9404-39b53d0a0e44 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:34.761 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:34.761 { 00:28:34.761 "name": "c1641ec5-79b0-4e09-b8b8-004416036aae", 00:28:34.761 "aliases": [ 00:28:34.761 "lvs/nvme0n1p0" 00:28:34.761 ], 00:28:34.761 "product_name": "Logical Volume", 00:28:34.761 "block_size": 4096, 00:28:34.761 "num_blocks": 26476544, 00:28:34.761 "uuid": "c1641ec5-79b0-4e09-b8b8-004416036aae", 00:28:34.761 "assigned_rate_limits": { 00:28:34.761 "rw_ios_per_sec": 0, 00:28:34.761 "rw_mbytes_per_sec": 0, 00:28:34.761 "r_mbytes_per_sec": 0, 00:28:34.761 "w_mbytes_per_sec": 0 00:28:34.761 }, 00:28:34.761 "claimed": false, 00:28:34.761 "zoned": false, 00:28:34.761 "supported_io_types": { 00:28:34.761 "read": true, 00:28:34.761 "write": true, 00:28:34.761 "unmap": true, 00:28:34.761 "flush": false, 00:28:34.761 "reset": true, 00:28:34.761 "nvme_admin": false, 00:28:34.761 "nvme_io": false, 00:28:34.761 "nvme_io_md": false, 00:28:34.761 "write_zeroes": true, 00:28:34.761 "zcopy": false, 00:28:34.761 "get_zone_info": false, 00:28:34.761 "zone_management": false, 00:28:34.761 "zone_append": 
false, 00:28:34.761 "compare": false, 00:28:34.761 "compare_and_write": false, 00:28:34.761 "abort": false, 00:28:34.761 "seek_hole": true, 00:28:34.761 "seek_data": true, 00:28:34.761 "copy": false, 00:28:34.761 "nvme_iov_md": false 00:28:34.761 }, 00:28:34.761 "driver_specific": { 00:28:34.761 "lvol": { 00:28:34.761 "lvol_store_uuid": "17fb5234-0e2f-4d47-9404-39b53d0a0e44", 00:28:34.761 "base_bdev": "nvme0n1", 00:28:34.761 "thin_provision": true, 00:28:34.761 "num_allocated_clusters": 0, 00:28:34.761 "snapshot": false, 00:28:34.761 "clone": false, 00:28:34.761 "esnap_clone": false 00:28:34.761 } 00:28:34.761 } 00:28:34.761 } 00:28:34.761 ]' 00:28:35.020 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:35.020 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:35.020 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:35.020 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:35.020 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:35.020 13:41:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:35.020 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:35.020 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:35.020 13:41:30 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:35.278 13:41:31 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:35.278 13:41:31 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:35.278 13:41:31 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:35.278 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:35.278 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:35.278 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:35.278 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:35.278 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:35.538 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:35.538 { 00:28:35.538 "name": "c1641ec5-79b0-4e09-b8b8-004416036aae", 00:28:35.538 "aliases": [ 00:28:35.538 "lvs/nvme0n1p0" 00:28:35.538 ], 00:28:35.538 "product_name": "Logical Volume", 00:28:35.538 "block_size": 4096, 00:28:35.538 "num_blocks": 26476544, 00:28:35.538 "uuid": "c1641ec5-79b0-4e09-b8b8-004416036aae", 00:28:35.538 "assigned_rate_limits": { 00:28:35.538 "rw_ios_per_sec": 0, 00:28:35.538 "rw_mbytes_per_sec": 0, 00:28:35.538 "r_mbytes_per_sec": 0, 00:28:35.538 "w_mbytes_per_sec": 0 00:28:35.538 }, 00:28:35.538 "claimed": false, 00:28:35.538 "zoned": false, 00:28:35.538 "supported_io_types": { 00:28:35.538 "read": true, 00:28:35.538 "write": true, 00:28:35.538 "unmap": true, 00:28:35.538 "flush": false, 00:28:35.538 "reset": true, 00:28:35.538 "nvme_admin": false, 00:28:35.538 "nvme_io": false, 00:28:35.538 "nvme_io_md": false, 00:28:35.538 "write_zeroes": true, 00:28:35.538 "zcopy": false, 00:28:35.538 "get_zone_info": false, 00:28:35.538 "zone_management": false, 
00:28:35.538 "zone_append": false, 00:28:35.538 "compare": false, 00:28:35.538 "compare_and_write": false, 00:28:35.538 "abort": false, 00:28:35.538 "seek_hole": true, 00:28:35.538 "seek_data": true, 00:28:35.538 "copy": false, 00:28:35.538 "nvme_iov_md": false 00:28:35.538 }, 00:28:35.538 "driver_specific": { 00:28:35.538 "lvol": { 00:28:35.538 "lvol_store_uuid": "17fb5234-0e2f-4d47-9404-39b53d0a0e44", 00:28:35.538 "base_bdev": "nvme0n1", 00:28:35.538 "thin_provision": true, 00:28:35.538 "num_allocated_clusters": 0, 00:28:35.538 "snapshot": false, 00:28:35.538 "clone": false, 00:28:35.538 "esnap_clone": false 00:28:35.538 } 00:28:35.538 } 00:28:35.538 } 00:28:35.538 ]' 00:28:35.538 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:35.538 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:35.538 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:35.538 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:35.538 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:35.538 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:35.538 13:41:31 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:35.538 13:41:31 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1641ec5-79b0-4e09-b8b8-004416036aae 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:35.797 { 00:28:35.797 "name": "c1641ec5-79b0-4e09-b8b8-004416036aae", 00:28:35.797 "aliases": [ 00:28:35.797 "lvs/nvme0n1p0" 00:28:35.797 ], 00:28:35.797 "product_name": "Logical Volume", 00:28:35.797 "block_size": 4096, 00:28:35.797 "num_blocks": 26476544, 00:28:35.797 "uuid": "c1641ec5-79b0-4e09-b8b8-004416036aae", 00:28:35.797 "assigned_rate_limits": { 00:28:35.797 "rw_ios_per_sec": 0, 00:28:35.797 "rw_mbytes_per_sec": 0, 00:28:35.797 "r_mbytes_per_sec": 0, 00:28:35.797 "w_mbytes_per_sec": 0 00:28:35.797 }, 00:28:35.797 "claimed": false, 00:28:35.797 "zoned": false, 00:28:35.797 "supported_io_types": { 00:28:35.797 "read": true, 00:28:35.797 "write": true, 00:28:35.797 "unmap": true, 00:28:35.797 "flush": false, 00:28:35.797 "reset": true, 00:28:35.797 "nvme_admin": false, 00:28:35.797 "nvme_io": false, 00:28:35.797 "nvme_io_md": false, 00:28:35.797 "write_zeroes": true, 00:28:35.797 "zcopy": false, 00:28:35.797 "get_zone_info": false, 00:28:35.797 "zone_management": false, 00:28:35.797 "zone_append": false, 00:28:35.797 "compare": false, 00:28:35.797 "compare_and_write": false, 00:28:35.797 "abort": false, 00:28:35.797 "seek_hole": 
true, 00:28:35.797 "seek_data": true, 00:28:35.797 "copy": false, 00:28:35.797 "nvme_iov_md": false 00:28:35.797 }, 00:28:35.797 "driver_specific": { 00:28:35.797 "lvol": { 00:28:35.797 "lvol_store_uuid": "17fb5234-0e2f-4d47-9404-39b53d0a0e44", 00:28:35.797 "base_bdev": "nvme0n1", 00:28:35.797 "thin_provision": true, 00:28:35.797 "num_allocated_clusters": 0, 00:28:35.797 "snapshot": false, 00:28:35.797 "clone": false, 00:28:35.797 "esnap_clone": false 00:28:35.797 } 00:28:35.797 } 00:28:35.797 } 00:28:35.797 ]' 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:35.797 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c1641ec5-79b0-4e09-b8b8-004416036aae --l2p_dram_limit 10' 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:36.057 13:41:31 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c1641ec5-79b0-4e09-b8b8-004416036aae --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:36.057 [2024-11-18 13:41:32.122945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.122986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:36.057 [2024-11-18 13:41:32.122997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:36.057 [2024-11-18 13:41:32.123005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.123046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.123055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:36.057 [2024-11-18 13:41:32.123062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:36.057 [2024-11-18 13:41:32.123072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.123090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:36.057 [2024-11-18 13:41:32.123326] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:36.057 [2024-11-18 13:41:32.123339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.123347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:36.057 [2024-11-18 13:41:32.123354] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:28:36.057 [2024-11-18 13:41:32.123361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.123385] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f35d27a6-af2a-4f68-9f73-b853ff45a994 00:28:36.057 [2024-11-18 13:41:32.124343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.124363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:36.057 [2024-11-18 13:41:32.124374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:36.057 [2024-11-18 13:41:32.124381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.129140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.129263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.057 [2024-11-18 13:41:32.129281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:28:36.057 [2024-11-18 13:41:32.129287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.129350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.129359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.057 [2024-11-18 13:41:32.129370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:36.057 [2024-11-18 13:41:32.129375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.129420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.129427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:36.057 [2024-11-18 13:41:32.129434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:36.057 [2024-11-18 13:41:32.129440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.129458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:36.057 [2024-11-18 13:41:32.130736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.130761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.057 [2024-11-18 13:41:32.130768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:28:36.057 [2024-11-18 13:41:32.130775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.130801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.130809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:36.057 [2024-11-18 13:41:32.130815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:36.057 [2024-11-18 13:41:32.130824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.130836] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:36.057 [2024-11-18 13:41:32.130941] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:36.057 [2024-11-18 13:41:32.130950] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:36.057 [2024-11-18 13:41:32.130965] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:36.057 [2024-11-18 13:41:32.130973] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:36.057 [2024-11-18 13:41:32.130983] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:36.057 [2024-11-18 13:41:32.130989] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:36.057 [2024-11-18 13:41:32.130999] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:36.057 [2024-11-18 13:41:32.131004] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:36.057 [2024-11-18 13:41:32.131011] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:36.057 [2024-11-18 13:41:32.131016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.131024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:36.057 [2024-11-18 13:41:32.131030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:28:36.057 [2024-11-18 13:41:32.131037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.131099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.057 [2024-11-18 13:41:32.131111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:36.057 [2024-11-18 13:41:32.131117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:36.057 [2024-11-18 13:41:32.131125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.057 [2024-11-18 13:41:32.131221] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:36.057 [2024-11-18 13:41:32.131231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:36.057 [2024-11-18 13:41:32.131237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:36.057 [2024-11-18 13:41:32.131247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:36.057 [2024-11-18 13:41:32.131259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:36.057 [2024-11-18 13:41:32.131272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:36.057 [2024-11-18 13:41:32.131278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.057 [2024-11-18 13:41:32.131289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:36.057 [2024-11-18 13:41:32.131298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:36.057 [2024-11-18 13:41:32.131303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.057 [2024-11-18 13:41:32.131311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:36.057 [2024-11-18 13:41:32.131316] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:28:36.057 [2024-11-18 13:41:32.131323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:36.057 [2024-11-18 13:41:32.131334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:36.057 [2024-11-18 13:41:32.131339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:36.057 [2024-11-18 13:41:32.131351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.057 [2024-11-18 13:41:32.131363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:36.057 [2024-11-18 13:41:32.131369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.057 [2024-11-18 13:41:32.131381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:36.057 [2024-11-18 13:41:32.131387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.057 [2024-11-18 13:41:32.131399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:36.057 [2024-11-18 13:41:32.131408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.057 [2024-11-18 13:41:32.131421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:36.057 [2024-11-18 13:41:32.131426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:36.057 [2024-11-18 13:41:32.131435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.058 [2024-11-18 13:41:32.131440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:36.058 [2024-11-18 13:41:32.131447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:36.058 [2024-11-18 13:41:32.131453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.058 [2024-11-18 13:41:32.131460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:36.058 [2024-11-18 13:41:32.131466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:36.058 [2024-11-18 13:41:32.131473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.058 [2024-11-18 13:41:32.131479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:36.058 [2024-11-18 13:41:32.131486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:36.058 [2024-11-18 13:41:32.131491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.058 [2024-11-18 13:41:32.131499] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:36.058 [2024-11-18 13:41:32.131506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:36.058 [2024-11-18 13:41:32.131515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:36.058 [2024-11-18 
13:41:32.131521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.058 [2024-11-18 13:41:32.131530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:36.058 [2024-11-18 13:41:32.131536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:36.058 [2024-11-18 13:41:32.131543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:36.058 [2024-11-18 13:41:32.131549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:36.058 [2024-11-18 13:41:32.131556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:36.058 [2024-11-18 13:41:32.131561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:36.058 [2024-11-18 13:41:32.131571] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:36.058 [2024-11-18 13:41:32.131580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.058 [2024-11-18 13:41:32.131589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:36.058 [2024-11-18 13:41:32.131595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:36.058 [2024-11-18 13:41:32.131604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:36.058 [2024-11-18 13:41:32.131610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:36.058 [2024-11-18 13:41:32.131617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:36.058 [2024-11-18 13:41:32.131623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:36.058 [2024-11-18 13:41:32.131632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:36.058 [2024-11-18 13:41:32.131638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:36.058 [2024-11-18 13:41:32.131645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:36.058 [2024-11-18 13:41:32.131651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:36.058 [2024-11-18 13:41:32.131659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:36.058 [2024-11-18 13:41:32.131665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:36.058 [2024-11-18 13:41:32.131673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:36.058 [2024-11-18 13:41:32.131679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:36.058 [2024-11-18 
13:41:32.131686] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:36.058 [2024-11-18 13:41:32.131693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.058 [2024-11-18 13:41:32.131701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:36.058 [2024-11-18 13:41:32.131707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:36.058 [2024-11-18 13:41:32.131716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:36.058 [2024-11-18 13:41:32.131722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:36.058 [2024-11-18 13:41:32.131730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.058 [2024-11-18 13:41:32.131737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:36.058 [2024-11-18 13:41:32.131748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:28:36.058 [2024-11-18 13:41:32.131754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.058 [2024-11-18 13:41:32.131788] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:36.058 [2024-11-18 13:41:32.131795] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:38.597 [2024-11-18 13:41:34.246990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.247062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:38.597 [2024-11-18 13:41:34.247079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2115.185 ms 00:28:38.597 [2024-11-18 13:41:34.247088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.257426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.257472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:38.597 [2024-11-18 13:41:34.257487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.208 ms 00:28:38.597 [2024-11-18 13:41:34.257496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.257614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.257624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:38.597 [2024-11-18 13:41:34.257638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:38.597 [2024-11-18 13:41:34.257646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.268043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.268088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:38.597 [2024-11-18 13:41:34.268102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.356 ms 00:28:38.597 [2024-11-18 13:41:34.268110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.268150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.268159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:38.597 [2024-11-18 13:41:34.268188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:38.597 [2024-11-18 13:41:34.268196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.268655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.268675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:38.597 [2024-11-18 13:41:34.268688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:28:38.597 [2024-11-18 13:41:34.268702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.268819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.268831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:38.597 [2024-11-18 13:41:34.268843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:28:38.597 [2024-11-18 13:41:34.268851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.275980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.276019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:38.597 [2024-11-18 13:41:34.276037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.106 ms 00:28:38.597 [2024-11-18 13:41:34.276044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.285449] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:38.597 [2024-11-18 13:41:34.288806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.288850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:38.597 [2024-11-18 13:41:34.288861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.700 ms 00:28:38.597 [2024-11-18 13:41:34.288871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.375604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.375903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:38.597 [2024-11-18 13:41:34.375931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.702 ms 00:28:38.597 [2024-11-18 13:41:34.375947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.376159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.376197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:38.597 [2024-11-18 13:41:34.376207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:28:38.597 [2024-11-18 13:41:34.376217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.382550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.382746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:28:38.597 [2024-11-18 13:41:34.382768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.289 ms 00:28:38.597 [2024-11-18 13:41:34.382782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.388194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.388249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:38.597 [2024-11-18 13:41:34.388261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.366 ms 00:28:38.597 [2024-11-18 13:41:34.388271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.388631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.388646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:38.597 [2024-11-18 13:41:34.388656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:28:38.597 [2024-11-18 13:41:34.388668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.429502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.429709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:38.597 [2024-11-18 13:41:34.429731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.811 ms 00:28:38.597 [2024-11-18 13:41:34.429747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.437024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.437237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:38.597 [2024-11-18 13:41:34.437258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.202 ms 00:28:38.597 [2024-11-18 13:41:34.437269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.443297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.443349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:38.597 [2024-11-18 13:41:34.443360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.909 ms 00:28:38.597 [2024-11-18 13:41:34.443370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.449424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.449479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:38.597 [2024-11-18 13:41:34.449490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.009 ms 00:28:38.597 [2024-11-18 13:41:34.449503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.449553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.449566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:38.597 [2024-11-18 13:41:34.449575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:38.597 [2024-11-18 13:41:34.449594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.449667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.597 [2024-11-18 13:41:34.449680] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:38.597 [2024-11-18 13:41:34.449689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:38.597 [2024-11-18 13:41:34.449699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.597 [2024-11-18 13:41:34.450833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2327.417 ms, result 0 00:28:38.597 { 00:28:38.597 "name": "ftl0", 00:28:38.597 "uuid": "f35d27a6-af2a-4f68-9f73-b853ff45a994" 00:28:38.597 } 00:28:38.597 13:41:34 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:38.597 13:41:34 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:38.597 13:41:34 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:38.597 13:41:34 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:38.859 [2024-11-18 13:41:34.888930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.889191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:38.859 [2024-11-18 13:41:34.889223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:38.859 [2024-11-18 13:41:34.889237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.889275] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:38.859 [2024-11-18 13:41:34.890028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.890073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:38.859 [2024-11-18 13:41:34.890086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:28:38.859 [2024-11-18 13:41:34.890098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.890388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.890410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:38.859 [2024-11-18 13:41:34.890420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:28:38.859 [2024-11-18 13:41:34.890438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.893696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.893724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:38.859 [2024-11-18 13:41:34.893738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:28:38.859 [2024-11-18 13:41:34.893748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.900089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.900136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:38.859 [2024-11-18 13:41:34.900148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.323 ms 00:28:38.859 [2024-11-18 13:41:34.900159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.903401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 
[2024-11-18 13:41:34.903624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:38.859 [2024-11-18 13:41:34.903645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.122 ms 00:28:38.859 [2024-11-18 13:41:34.903656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.910273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.910505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:38.859 [2024-11-18 13:41:34.910530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.295 ms 00:28:38.859 [2024-11-18 13:41:34.910542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.910678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.910693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:38.859 [2024-11-18 13:41:34.910709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:28:38.859 [2024-11-18 13:41:34.910719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.913726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.913918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:38.859 [2024-11-18 13:41:34.913937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:28:38.859 [2024-11-18 13:41:34.913947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.916895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.916958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:38.859 [2024-11-18 13:41:34.916969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:28:38.859 [2024-11-18 13:41:34.916978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.919286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.919359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:38.859 [2024-11-18 13:41:34.919371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:28:38.859 [2024-11-18 13:41:34.919382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.921575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.859 [2024-11-18 13:41:34.921636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:38.859 [2024-11-18 13:41:34.921647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:28:38.859 [2024-11-18 13:41:34.921656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.859 [2024-11-18 13:41:34.921703] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:38.859 [2024-11-18 13:41:34.921722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:38.859 [2024-11-18 13:41:34.921897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921968] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.921995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 
13:41:34.922218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:38.860 [2024-11-18 13:41:34.922449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:38.860 [2024-11-18 13:41:34.922660] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:38.860 [2024-11-18 13:41:34.922668] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f35d27a6-af2a-4f68-9f73-b853ff45a994 00:28:38.860 
[2024-11-18 13:41:34.922679] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:38.860 [2024-11-18 13:41:34.922686] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:38.860 [2024-11-18 13:41:34.922696] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:38.860 [2024-11-18 13:41:34.922704] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:38.860 [2024-11-18 13:41:34.922713] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:38.860 [2024-11-18 13:41:34.922724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:38.860 [2024-11-18 13:41:34.922735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:38.861 [2024-11-18 13:41:34.922741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:38.861 [2024-11-18 13:41:34.922750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:38.861 [2024-11-18 13:41:34.922758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.861 [2024-11-18 13:41:34.922767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:38.861 [2024-11-18 13:41:34.922776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:28:38.861 [2024-11-18 13:41:34.922790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.925330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.861 [2024-11-18 13:41:34.925376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:38.861 [2024-11-18 13:41:34.925386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:28:38.861 [2024-11-18 13:41:34.925401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.925534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.861 [2024-11-18 13:41:34.925545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:38.861 [2024-11-18 13:41:34.925555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:28:38.861 [2024-11-18 13:41:34.925566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.933956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.934009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:38.861 [2024-11-18 13:41:34.934020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.934034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.934101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.934113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:38.861 [2024-11-18 13:41:34.934121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.934131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.934216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.934233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:38.861 [2024-11-18 13:41:34.934242] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.934252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.934272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.934283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:38.861 [2024-11-18 13:41:34.934291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.934302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.948144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.948216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:38.861 [2024-11-18 13:41:34.948229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.948243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.958868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.959083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:38.861 [2024-11-18 13:41:34.959101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.959113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.959258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.959277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:38.861 [2024-11-18 13:41:34.959287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.959301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.959348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.959364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:38.861 [2024-11-18 13:41:34.959372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.959382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.959462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.959475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:38.861 [2024-11-18 13:41:34.959484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.959497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.959535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.959550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:38.861 [2024-11-18 13:41:34.959558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.959568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.959608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.959623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:38.861 [2024-11-18 13:41:34.959632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.959644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.959693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.861 [2024-11-18 13:41:34.959708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:38.861 [2024-11-18 13:41:34.959718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.861 [2024-11-18 13:41:34.959729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.861 [2024-11-18 13:41:34.959870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.907 ms, result 0 00:28:38.861 true 00:28:38.861 13:41:34 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92808 00:28:38.861 13:41:34 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92808 ']' 00:28:38.861 13:41:34 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92808 00:28:39.122 13:41:34 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:28:39.122 13:41:34 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.122 13:41:34 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92808 00:28:39.122 killing process with pid 92808 00:28:39.122 13:41:35 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.122 13:41:35 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.122 13:41:35 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92808' 00:28:39.122 13:41:35 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 92808 00:28:39.122 13:41:35 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 92808 00:28:44.408 13:41:40 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:47.701 262144+0 records in 00:28:47.701 262144+0 records out 00:28:47.701 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.52312 s, 305 MB/s 00:28:47.701 13:41:43 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:49.608 13:41:45 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:49.608 [2024-11-18 13:41:45.583716] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
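The restore.sh@69–73 commands above stage the test data for the restore check: dd fills testfile with 1 GiB of random bytes (bs=4K × count=256K = 262,144 × 4,096 B = 1,073,741,824 B; 1,073,741,824 B / 3.52312 s ≈ 305 MB/s, matching the reported rate), md5sum records its checksum, and spdk_dd then writes the file into the ftl0 bdev using the bdev configuration saved earlier by restore.sh@61–63 (presumably assembled into the ftl.json passed via --json). The following is a minimal bash sketch of that staging step, using only paths and flags that appear in the log; the later read-back and checksum comparison are assumed to happen further on in restore.sh and are not reproduced here.

    # Minimal sketch of the data-staging step shown above (paths and flags
    # taken verbatim from the log; the read-back/md5 comparison is assumed
    # to follow later in restore.sh and is not shown here).
    SPDK=/home/vagrant/spdk_repo/spdk
    TESTFILE=$SPDK/test/ftl/testfile
    FTL_JSON=$SPDK/test/ftl/config/ftl.json

    # 262,144 blocks x 4 KiB = 1 GiB of random test data
    dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

    # checksum of the source data, kept for comparison after restore
    md5sum "$TESTFILE"

    # write the file into the ftl0 bdev using the saved bdev configuration
    "$SPDK"/build/bin/spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"

The second SPDK startup log that follows ("Starting SPDK v25.01-pre git sha1 d47eb51c9 ...") appears to be spdk_dd itself bringing the FTL device back up from that saved configuration (FTL layout setup mode 0, loading the existing superblock) before the data is written.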
00:28:49.608 [2024-11-18 13:41:45.583866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93002 ] 00:28:49.870 [2024-11-18 13:41:45.749101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.870 [2024-11-18 13:41:45.778212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.870 [2024-11-18 13:41:45.889093] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:49.870 [2024-11-18 13:41:45.889195] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:50.133 [2024-11-18 13:41:46.052887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.133 [2024-11-18 13:41:46.053123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:50.133 [2024-11-18 13:41:46.053150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:50.133 [2024-11-18 13:41:46.053161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.133 [2024-11-18 13:41:46.053263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.133 [2024-11-18 13:41:46.053278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:50.133 [2024-11-18 13:41:46.053288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:50.133 [2024-11-18 13:41:46.053297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.133 [2024-11-18 13:41:46.053327] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:50.133 [2024-11-18 13:41:46.053598] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:50.133 [2024-11-18 13:41:46.053617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.133 [2024-11-18 13:41:46.053627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:50.133 [2024-11-18 13:41:46.053637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:28:50.133 [2024-11-18 13:41:46.053649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.133 [2024-11-18 13:41:46.055432] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:50.133 [2024-11-18 13:41:46.059276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.133 [2024-11-18 13:41:46.059468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:50.133 [2024-11-18 13:41:46.059489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.846 ms 00:28:50.133 [2024-11-18 13:41:46.059506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.133 [2024-11-18 13:41:46.059578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.133 [2024-11-18 13:41:46.059591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:50.133 [2024-11-18 13:41:46.059605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:50.133 [2024-11-18 13:41:46.059612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.133 [2024-11-18 13:41:46.068325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:50.133 [2024-11-18 13:41:46.068369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:50.133 [2024-11-18 13:41:46.068384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.669 ms 00:28:50.133 [2024-11-18 13:41:46.068392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.133 [2024-11-18 13:41:46.068499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.133 [2024-11-18 13:41:46.068509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:50.133 [2024-11-18 13:41:46.068519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:50.133 [2024-11-18 13:41:46.068529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.134 [2024-11-18 13:41:46.068589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.134 [2024-11-18 13:41:46.068599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:50.134 [2024-11-18 13:41:46.068608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:50.134 [2024-11-18 13:41:46.068621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.134 [2024-11-18 13:41:46.068646] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:50.134 [2024-11-18 13:41:46.070723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.134 [2024-11-18 13:41:46.070897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:50.134 [2024-11-18 13:41:46.070914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:28:50.134 [2024-11-18 13:41:46.070923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.134 [2024-11-18 13:41:46.070970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.134 [2024-11-18 13:41:46.070979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:50.134 [2024-11-18 13:41:46.070991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:50.134 [2024-11-18 13:41:46.070999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.134 [2024-11-18 13:41:46.071028] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:50.134 [2024-11-18 13:41:46.071049] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:50.134 [2024-11-18 13:41:46.071087] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:50.134 [2024-11-18 13:41:46.071105] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:50.134 [2024-11-18 13:41:46.071249] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:50.134 [2024-11-18 13:41:46.071262] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:50.134 [2024-11-18 13:41:46.071274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:50.134 [2024-11-18 13:41:46.071288] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071298] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071307] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:50.134 [2024-11-18 13:41:46.071315] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:50.134 [2024-11-18 13:41:46.071323] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:50.134 [2024-11-18 13:41:46.071331] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:50.134 [2024-11-18 13:41:46.071340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.134 [2024-11-18 13:41:46.071348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:50.134 [2024-11-18 13:41:46.071356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:28:50.134 [2024-11-18 13:41:46.071366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.134 [2024-11-18 13:41:46.071449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.134 [2024-11-18 13:41:46.071461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:50.134 [2024-11-18 13:41:46.071470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:50.134 [2024-11-18 13:41:46.071479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.134 [2024-11-18 13:41:46.071583] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:50.134 [2024-11-18 13:41:46.071595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:50.134 [2024-11-18 13:41:46.071604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:50.134 [2024-11-18 13:41:46.071644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:50.134 [2024-11-18 13:41:46.071670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:50.134 [2024-11-18 13:41:46.071689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:50.134 [2024-11-18 13:41:46.071697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:50.134 [2024-11-18 13:41:46.071704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:50.134 [2024-11-18 13:41:46.071711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:50.134 [2024-11-18 13:41:46.071720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:50.134 [2024-11-18 13:41:46.071728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:50.134 [2024-11-18 13:41:46.071745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071753] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:50.134 [2024-11-18 13:41:46.071770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:50.134 [2024-11-18 13:41:46.071794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:50.134 [2024-11-18 13:41:46.071824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:50.134 [2024-11-18 13:41:46.071848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:50.134 [2024-11-18 13:41:46.071869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:50.134 [2024-11-18 13:41:46.071882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:50.134 [2024-11-18 13:41:46.071889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:50.134 [2024-11-18 13:41:46.071895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:50.134 [2024-11-18 13:41:46.071902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:50.134 [2024-11-18 13:41:46.071908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:50.134 [2024-11-18 13:41:46.071915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:50.134 [2024-11-18 13:41:46.071928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:50.134 [2024-11-18 13:41:46.071937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071944] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:50.134 [2024-11-18 13:41:46.071952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:50.134 [2024-11-18 13:41:46.071962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:50.134 [2024-11-18 13:41:46.071970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:50.134 [2024-11-18 13:41:46.071978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:50.134 [2024-11-18 13:41:46.071987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:50.134 [2024-11-18 13:41:46.071994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:50.134 
[2024-11-18 13:41:46.072002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:50.134 [2024-11-18 13:41:46.072009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:50.134 [2024-11-18 13:41:46.072016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:50.134 [2024-11-18 13:41:46.072024] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:50.134 [2024-11-18 13:41:46.072034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:50.134 [2024-11-18 13:41:46.072043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:50.134 [2024-11-18 13:41:46.072050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:50.134 [2024-11-18 13:41:46.072058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:50.134 [2024-11-18 13:41:46.072067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:50.134 [2024-11-18 13:41:46.072074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:50.134 [2024-11-18 13:41:46.072081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:50.134 [2024-11-18 13:41:46.072089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:50.134 [2024-11-18 13:41:46.072096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:50.134 [2024-11-18 13:41:46.072103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:50.134 [2024-11-18 13:41:46.072110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:50.134 [2024-11-18 13:41:46.072117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:50.135 [2024-11-18 13:41:46.072123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:50.135 [2024-11-18 13:41:46.072131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:50.135 [2024-11-18 13:41:46.072139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:50.135 [2024-11-18 13:41:46.072146] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:50.135 [2024-11-18 13:41:46.072154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:50.135 [2024-11-18 13:41:46.072197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:50.135 [2024-11-18 13:41:46.072206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:50.135 [2024-11-18 13:41:46.072214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:50.135 [2024-11-18 13:41:46.072225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:50.135 [2024-11-18 13:41:46.072233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.072241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:50.135 [2024-11-18 13:41:46.072249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:28:50.135 [2024-11-18 13:41:46.072265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.087096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.087281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:50.135 [2024-11-18 13:41:46.087343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.778 ms 00:28:50.135 [2024-11-18 13:41:46.087368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.087472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.087495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:50.135 [2024-11-18 13:41:46.087516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:50.135 [2024-11-18 13:41:46.087544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.109705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.109836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:50.135 [2024-11-18 13:41:46.109881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.088 ms 00:28:50.135 [2024-11-18 13:41:46.109911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.109991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.110026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:50.135 [2024-11-18 13:41:46.110056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:50.135 [2024-11-18 13:41:46.110090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.110791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.110984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:50.135 [2024-11-18 13:41:46.111065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:28:50.135 [2024-11-18 13:41:46.111112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.111378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.111611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:50.135 [2024-11-18 13:41:46.111647] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:28:50.135 [2024-11-18 13:41:46.111674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.120413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.120581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:50.135 [2024-11-18 13:41:46.120652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.687 ms 00:28:50.135 [2024-11-18 13:41:46.121151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.124970] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:50.135 [2024-11-18 13:41:46.125180] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:50.135 [2024-11-18 13:41:46.125253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.125278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:50.135 [2024-11-18 13:41:46.125300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.794 ms 00:28:50.135 [2024-11-18 13:41:46.125319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.141254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.141421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:50.135 [2024-11-18 13:41:46.141493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.874 ms 00:28:50.135 [2024-11-18 13:41:46.141524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.144611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.144770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:50.135 [2024-11-18 13:41:46.144826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.032 ms 00:28:50.135 [2024-11-18 13:41:46.144850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.147631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.147788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:50.135 [2024-11-18 13:41:46.147843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.733 ms 00:28:50.135 [2024-11-18 13:41:46.147866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.148261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.148307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:50.135 [2024-11-18 13:41:46.148386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:28:50.135 [2024-11-18 13:41:46.148409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.174153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.174367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:50.135 [2024-11-18 13:41:46.174428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.707 ms 00:28:50.135 [2024-11-18 13:41:46.174451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.182644] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:50.135 [2024-11-18 13:41:46.186108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.186288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:50.135 [2024-11-18 13:41:46.186319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.602 ms 00:28:50.135 [2024-11-18 13:41:46.186331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.186427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.186442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:50.135 [2024-11-18 13:41:46.186456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:50.135 [2024-11-18 13:41:46.186464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.186534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.186545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:50.135 [2024-11-18 13:41:46.186553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:50.135 [2024-11-18 13:41:46.186564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.186589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.186598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:50.135 [2024-11-18 13:41:46.186606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:50.135 [2024-11-18 13:41:46.186614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.186655] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:50.135 [2024-11-18 13:41:46.186665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.186677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:50.135 [2024-11-18 13:41:46.186689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:50.135 [2024-11-18 13:41:46.186697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.192246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.192291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:50.135 [2024-11-18 13:41:46.192303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.527 ms 00:28:50.135 [2024-11-18 13:41:46.192312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 [2024-11-18 13:41:46.192403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.135 [2024-11-18 13:41:46.192414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:50.135 [2024-11-18 13:41:46.192426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:50.135 [2024-11-18 13:41:46.192435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.135 
[2024-11-18 13:41:46.193592] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 140.224 ms, result 0 00:28:51.079  [2024-11-18T13:41:48.596Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-18T13:41:49.540Z] Copying: 31/1024 [MB] (12 MBps) [2024-11-18T13:41:50.482Z] Copying: 43/1024 [MB] (12 MBps) [2024-11-18T13:41:51.423Z] Copying: 66/1024 [MB] (22 MBps) [2024-11-18T13:41:52.364Z] Copying: 85/1024 [MB] (19 MBps) [2024-11-18T13:41:53.307Z] Copying: 104/1024 [MB] (18 MBps) [2024-11-18T13:41:54.244Z] Copying: 126/1024 [MB] (22 MBps) [2024-11-18T13:41:55.629Z] Copying: 171/1024 [MB] (45 MBps) [2024-11-18T13:41:56.571Z] Copying: 185/1024 [MB] (13 MBps) [2024-11-18T13:41:57.511Z] Copying: 199/1024 [MB] (13 MBps) [2024-11-18T13:41:58.452Z] Copying: 221/1024 [MB] (22 MBps) [2024-11-18T13:41:59.394Z] Copying: 241/1024 [MB] (20 MBps) [2024-11-18T13:42:00.365Z] Copying: 257/1024 [MB] (15 MBps) [2024-11-18T13:42:01.308Z] Copying: 282/1024 [MB] (25 MBps) [2024-11-18T13:42:02.255Z] Copying: 297/1024 [MB] (14 MBps) [2024-11-18T13:42:03.645Z] Copying: 311/1024 [MB] (14 MBps) [2024-11-18T13:42:04.218Z] Copying: 321/1024 [MB] (10 MBps) [2024-11-18T13:42:05.606Z] Copying: 332/1024 [MB] (10 MBps) [2024-11-18T13:42:06.552Z] Copying: 350328/1048576 [kB] (10224 kBps) [2024-11-18T13:42:07.496Z] Copying: 352/1024 [MB] (10 MBps) [2024-11-18T13:42:08.440Z] Copying: 362/1024 [MB] (10 MBps) [2024-11-18T13:42:09.384Z] Copying: 373/1024 [MB] (10 MBps) [2024-11-18T13:42:10.329Z] Copying: 383/1024 [MB] (10 MBps) [2024-11-18T13:42:11.277Z] Copying: 393/1024 [MB] (10 MBps) [2024-11-18T13:42:12.223Z] Copying: 405/1024 [MB] (11 MBps) [2024-11-18T13:42:13.612Z] Copying: 419/1024 [MB] (14 MBps) [2024-11-18T13:42:14.558Z] Copying: 435/1024 [MB] (15 MBps) [2024-11-18T13:42:15.503Z] Copying: 451/1024 [MB] (16 MBps) [2024-11-18T13:42:16.449Z] Copying: 462/1024 [MB] (10 MBps) [2024-11-18T13:42:17.394Z] Copying: 472/1024 [MB] (10 MBps) [2024-11-18T13:42:18.339Z] Copying: 493952/1048576 [kB] (10208 kBps) [2024-11-18T13:42:19.283Z] Copying: 492/1024 [MB] (10 MBps) [2024-11-18T13:42:20.224Z] Copying: 503/1024 [MB] (10 MBps) [2024-11-18T13:42:21.612Z] Copying: 514/1024 [MB] (11 MBps) [2024-11-18T13:42:22.557Z] Copying: 525/1024 [MB] (11 MBps) [2024-11-18T13:42:23.502Z] Copying: 536/1024 [MB] (11 MBps) [2024-11-18T13:42:24.446Z] Copying: 548/1024 [MB] (11 MBps) [2024-11-18T13:42:25.393Z] Copying: 559/1024 [MB] (11 MBps) [2024-11-18T13:42:26.337Z] Copying: 570/1024 [MB] (11 MBps) [2024-11-18T13:42:27.283Z] Copying: 581/1024 [MB] (11 MBps) [2024-11-18T13:42:28.255Z] Copying: 592/1024 [MB] (10 MBps) [2024-11-18T13:42:29.264Z] Copying: 602/1024 [MB] (10 MBps) [2024-11-18T13:42:30.206Z] Copying: 613/1024 [MB] (11 MBps) [2024-11-18T13:42:31.592Z] Copying: 624/1024 [MB] (10 MBps) [2024-11-18T13:42:32.536Z] Copying: 635/1024 [MB] (10 MBps) [2024-11-18T13:42:33.479Z] Copying: 647/1024 [MB] (11 MBps) [2024-11-18T13:42:34.424Z] Copying: 658/1024 [MB] (11 MBps) [2024-11-18T13:42:35.369Z] Copying: 670/1024 [MB] (11 MBps) [2024-11-18T13:42:36.313Z] Copying: 681/1024 [MB] (11 MBps) [2024-11-18T13:42:37.257Z] Copying: 693/1024 [MB] (11 MBps) [2024-11-18T13:42:38.645Z] Copying: 704/1024 [MB] (11 MBps) [2024-11-18T13:42:39.218Z] Copying: 715/1024 [MB] (11 MBps) [2024-11-18T13:42:40.605Z] Copying: 727/1024 [MB] (11 MBps) [2024-11-18T13:42:41.550Z] Copying: 738/1024 [MB] (11 MBps) [2024-11-18T13:42:42.494Z] Copying: 749/1024 [MB] (11 MBps) [2024-11-18T13:42:43.437Z] Copying: 759/1024 [MB] (10 MBps) 
[2024-11-18T13:42:44.382Z] Copying: 771/1024 [MB] (11 MBps) [2024-11-18T13:42:45.326Z] Copying: 782/1024 [MB] (11 MBps) [2024-11-18T13:42:46.271Z] Copying: 793/1024 [MB] (11 MBps) [2024-11-18T13:42:47.213Z] Copying: 804/1024 [MB] (11 MBps) [2024-11-18T13:42:48.604Z] Copying: 815/1024 [MB] (10 MBps) [2024-11-18T13:42:49.547Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-18T13:42:50.492Z] Copying: 836/1024 [MB] (10 MBps) [2024-11-18T13:42:51.438Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-18T13:42:52.383Z] Copying: 857/1024 [MB] (10 MBps) [2024-11-18T13:42:53.329Z] Copying: 867/1024 [MB] (10 MBps) [2024-11-18T13:42:54.263Z] Copying: 878/1024 [MB] (10 MBps) [2024-11-18T13:42:55.642Z] Copying: 917/1024 [MB] (39 MBps) [2024-11-18T13:42:56.215Z] Copying: 967/1024 [MB] (50 MBps) [2024-11-18T13:42:57.613Z] Copying: 987/1024 [MB] (19 MBps) [2024-11-18T13:42:58.571Z] Copying: 1002/1024 [MB] (15 MBps) [2024-11-18T13:42:59.148Z] Copying: 1016/1024 [MB] (14 MBps) [2024-11-18T13:42:59.148Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 13:42:58.844911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.020 [2024-11-18 13:42:58.845103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:03.020 [2024-11-18 13:42:58.845233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:03.020 [2024-11-18 13:42:58.845268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.020 [2024-11-18 13:42:58.845328] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:03.020 [2024-11-18 13:42:58.846360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.020 [2024-11-18 13:42:58.846550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:03.020 [2024-11-18 13:42:58.846668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:30:03.020 [2024-11-18 13:42:58.846700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.020 [2024-11-18 13:42:58.849667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.020 [2024-11-18 13:42:58.849849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:03.020 [2024-11-18 13:42:58.849969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.921 ms 00:30:03.020 [2024-11-18 13:42:58.850001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.020 [2024-11-18 13:42:58.850059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.020 [2024-11-18 13:42:58.850082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:03.020 [2024-11-18 13:42:58.850187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:03.020 [2024-11-18 13:42:58.850216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.020 [2024-11-18 13:42:58.850292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.020 [2024-11-18 13:42:58.850317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:03.020 [2024-11-18 13:42:58.850339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:03.020 [2024-11-18 13:42:58.850416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.020 [2024-11-18 13:42:58.850463] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:30:03.020 [2024-11-18 13:42:58.850492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.850981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:30:03.020 [2024-11-18 13:42:58.851611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:03.020 [2024-11-18 13:42:58.851915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.851990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852331] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:03.021 [2024-11-18 13:42:58.852348] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:03.021 [2024-11-18 13:42:58.852356] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f35d27a6-af2a-4f68-9f73-b853ff45a994 00:30:03.021 [2024-11-18 13:42:58.852364] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:03.021 [2024-11-18 13:42:58.852372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:03.021 [2024-11-18 13:42:58.852379] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:03.021 [2024-11-18 13:42:58.852388] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:03.021 [2024-11-18 13:42:58.852403] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:03.021 [2024-11-18 13:42:58.852412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:03.021 [2024-11-18 13:42:58.852420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:03.021 [2024-11-18 13:42:58.852427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:03.021 [2024-11-18 13:42:58.852433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:03.021 [2024-11-18 13:42:58.852441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.021 [2024-11-18 13:42:58.852449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:03.021 [2024-11-18 13:42:58.852463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.979 ms 00:30:03.021 [2024-11-18 13:42:58.852473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.021 [2024-11-18 13:42:58.855126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.021 [2024-11-18 13:42:58.855371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:03.021 [2024-11-18 13:42:58.855691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.623 ms 00:30:03.021 [2024-11-18 13:42:58.855761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.021 [2024-11-18 13:42:58.855919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.021 [2024-11-18 13:42:58.856024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:03.021 [2024-11-18 13:42:58.856072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:30:03.021 [2024-11-18 13:42:58.856092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.021 [2024-11-18 13:42:58.863926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.021 [2024-11-18 13:42:58.864117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:03.021 [2024-11-18 13:42:58.864221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.021 [2024-11-18 13:42:58.864252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.021 [2024-11-18 13:42:58.864349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.021 [2024-11-18 13:42:58.864546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:03.021 [2024-11-18 13:42:58.864582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.021 [2024-11-18 13:42:58.864602] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.021 [2024-11-18 13:42:58.864690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.021 [2024-11-18 13:42:58.864716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:03.021 [2024-11-18 13:42:58.864736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.021 [2024-11-18 13:42:58.864756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.021 [2024-11-18 13:42:58.864788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.021 [2024-11-18 13:42:58.864809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:03.021 [2024-11-18 13:42:58.864829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.021 [2024-11-18 13:42:58.864927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.021 [2024-11-18 13:42:58.878965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.021 [2024-11-18 13:42:58.879207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:03.021 [2024-11-18 13:42:58.879349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.021 [2024-11-18 13:42:58.879382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.021 [2024-11-18 13:42:58.889660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.021 [2024-11-18 13:42:58.889843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:03.022 [2024-11-18 13:42:58.889901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.022 [2024-11-18 13:42:58.889933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.022 [2024-11-18 13:42:58.889993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.022 [2024-11-18 13:42:58.890016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:03.022 [2024-11-18 13:42:58.890037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.022 [2024-11-18 13:42:58.890056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.022 [2024-11-18 13:42:58.890110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.022 [2024-11-18 13:42:58.890132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:03.022 [2024-11-18 13:42:58.890153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.022 [2024-11-18 13:42:58.890230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.022 [2024-11-18 13:42:58.890313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.022 [2024-11-18 13:42:58.890337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:03.022 [2024-11-18 13:42:58.890359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.022 [2024-11-18 13:42:58.890380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.022 [2024-11-18 13:42:58.890469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.022 [2024-11-18 13:42:58.890496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:03.022 [2024-11-18 13:42:58.890517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:03.022 [2024-11-18 13:42:58.890536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.022 [2024-11-18 13:42:58.890591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.022 [2024-11-18 13:42:58.890614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:03.022 [2024-11-18 13:42:58.890666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.022 [2024-11-18 13:42:58.890689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.022 [2024-11-18 13:42:58.890752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.022 [2024-11-18 13:42:58.890776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:03.022 [2024-11-18 13:42:58.890795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.022 [2024-11-18 13:42:58.890849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.022 [2024-11-18 13:42:58.891011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 46.056 ms, result 0 00:30:03.591 00:30:03.592 00:30:03.592 13:42:59 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:03.592 [2024-11-18 13:42:59.538988] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:30:03.592 [2024-11-18 13:42:59.539119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93748 ] 00:30:03.592 [2024-11-18 13:42:59.696116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.853 [2024-11-18 13:42:59.720650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.853 [2024-11-18 13:42:59.807549] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:03.853 [2024-11-18 13:42:59.807603] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:03.853 [2024-11-18 13:42:59.954071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.954107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:03.853 [2024-11-18 13:42:59.954118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:03.853 [2024-11-18 13:42:59.954124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.954157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.954164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:03.853 [2024-11-18 13:42:59.954187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:03.853 [2024-11-18 13:42:59.954193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.954207] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:03.853 [2024-11-18 13:42:59.954379] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using bdev as NV Cache device 00:30:03.853 [2024-11-18 13:42:59.954389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.954398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:03.853 [2024-11-18 13:42:59.954405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:30:03.853 [2024-11-18 13:42:59.954412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.954888] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:03.853 [2024-11-18 13:42:59.954923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.954930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:03.853 [2024-11-18 13:42:59.954937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:03.853 [2024-11-18 13:42:59.954943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.955010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.955020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:03.853 [2024-11-18 13:42:59.955026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:03.853 [2024-11-18 13:42:59.955032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.955273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.955282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:03.853 [2024-11-18 13:42:59.955291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:30:03.853 [2024-11-18 13:42:59.955297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.955357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.955364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:03.853 [2024-11-18 13:42:59.955369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:03.853 [2024-11-18 13:42:59.955375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.955391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.955397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:03.853 [2024-11-18 13:42:59.955403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:03.853 [2024-11-18 13:42:59.955409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.955422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:03.853 [2024-11-18 13:42:59.956648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.956774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:03.853 [2024-11-18 13:42:59.956786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:30:03.853 [2024-11-18 13:42:59.956793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.956819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:03.853 [2024-11-18 13:42:59.956825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:03.853 [2024-11-18 13:42:59.956831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:03.853 [2024-11-18 13:42:59.956837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.853 [2024-11-18 13:42:59.956849] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:03.853 [2024-11-18 13:42:59.956868] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:03.853 [2024-11-18 13:42:59.956894] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:03.853 [2024-11-18 13:42:59.956907] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:03.853 [2024-11-18 13:42:59.956985] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:03.853 [2024-11-18 13:42:59.956993] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:03.853 [2024-11-18 13:42:59.957001] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:03.853 [2024-11-18 13:42:59.957008] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957016] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957024] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:03.854 [2024-11-18 13:42:59.957029] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:03.854 [2024-11-18 13:42:59.957035] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:03.854 [2024-11-18 13:42:59.957040] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:03.854 [2024-11-18 13:42:59.957046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.854 [2024-11-18 13:42:59.957051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:03.854 [2024-11-18 13:42:59.957057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:30:03.854 [2024-11-18 13:42:59.957062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.854 [2024-11-18 13:42:59.957124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.854 [2024-11-18 13:42:59.957130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:03.854 [2024-11-18 13:42:59.957137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:03.854 [2024-11-18 13:42:59.957143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.854 [2024-11-18 13:42:59.957231] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:03.854 [2024-11-18 13:42:59.957240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:03.854 [2024-11-18 13:42:59.957246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.854 [2024-11-18 
13:42:59.957259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:03.854 [2024-11-18 13:42:59.957264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:03.854 [2024-11-18 13:42:59.957285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:03.854 [2024-11-18 13:42:59.957295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:03.854 [2024-11-18 13:42:59.957301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:03.854 [2024-11-18 13:42:59.957306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:03.854 [2024-11-18 13:42:59.957311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:03.854 [2024-11-18 13:42:59.957316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:03.854 [2024-11-18 13:42:59.957321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:03.854 [2024-11-18 13:42:59.957331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:03.854 [2024-11-18 13:42:59.957349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:03.854 [2024-11-18 13:42:59.957366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:03.854 [2024-11-18 13:42:59.957383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:03.854 [2024-11-18 13:42:59.957400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:03.854 [2024-11-18 13:42:59.957416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:03.854 [2024-11-18 13:42:59.957428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:03.854 [2024-11-18 13:42:59.957436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:30:03.854 [2024-11-18 13:42:59.957442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:03.854 [2024-11-18 13:42:59.957447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:03.854 [2024-11-18 13:42:59.957453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:03.854 [2024-11-18 13:42:59.957458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:03.854 [2024-11-18 13:42:59.957470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:03.854 [2024-11-18 13:42:59.957476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957483] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:03.854 [2024-11-18 13:42:59.957490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:03.854 [2024-11-18 13:42:59.957496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.854 [2024-11-18 13:42:59.957514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:03.854 [2024-11-18 13:42:59.957520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:03.854 [2024-11-18 13:42:59.957526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:03.854 [2024-11-18 13:42:59.957532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:03.854 [2024-11-18 13:42:59.957539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:03.854 [2024-11-18 13:42:59.957544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:03.854 [2024-11-18 13:42:59.957551] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:03.854 [2024-11-18 13:42:59.957559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:03.854 [2024-11-18 13:42:59.957566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:03.854 [2024-11-18 13:42:59.957572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:03.854 [2024-11-18 13:42:59.957578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:03.854 [2024-11-18 13:42:59.957584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:03.854 [2024-11-18 13:42:59.957591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:03.854 [2024-11-18 13:42:59.957596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:03.854 [2024-11-18 13:42:59.957602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:03.854 [2024-11-18 13:42:59.957609] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:03.854 [2024-11-18 13:42:59.957615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:03.854 [2024-11-18 13:42:59.957621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:03.854 [2024-11-18 13:42:59.957627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:03.854 [2024-11-18 13:42:59.957633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:03.854 [2024-11-18 13:42:59.957641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:03.854 [2024-11-18 13:42:59.957647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:03.854 [2024-11-18 13:42:59.957653] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:03.854 [2024-11-18 13:42:59.957660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:03.854 [2024-11-18 13:42:59.957667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:03.854 [2024-11-18 13:42:59.957674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:03.854 [2024-11-18 13:42:59.957680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:03.854 [2024-11-18 13:42:59.957686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:03.854 [2024-11-18 13:42:59.957693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.854 [2024-11-18 13:42:59.957702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:03.854 [2024-11-18 13:42:59.957708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:30:03.854 [2024-11-18 13:42:59.957714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.854 [2024-11-18 13:42:59.963197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.854 [2024-11-18 13:42:59.963213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:03.854 [2024-11-18 13:42:59.963221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.443 ms 00:30:03.854 [2024-11-18 13:42:59.963227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.854 [2024-11-18 13:42:59.963288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.854 [2024-11-18 13:42:59.963294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:03.854 [2024-11-18 13:42:59.963300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:03.855 [2024-11-18 13:42:59.963306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:04.116 [2024-11-18 13:42:59.979714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.116 [2024-11-18 13:42:59.979844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:04.116 [2024-11-18 13:42:59.979901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.372 ms 00:30:04.117 [2024-11-18 13:42:59.979926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:42:59.979966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:42:59.979988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:04.117 [2024-11-18 13:42:59.980008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:04.117 [2024-11-18 13:42:59.980027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:42:59.980140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:42:59.980191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:04.117 [2024-11-18 13:42:59.980213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:04.117 [2024-11-18 13:42:59.980233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:42:59.980359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:42:59.980390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:04.117 [2024-11-18 13:42:59.980411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:30:04.117 [2024-11-18 13:42:59.980477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:42:59.985488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:42:59.985600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:04.117 [2024-11-18 13:42:59.985666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.976 ms 00:30:04.117 [2024-11-18 13:42:59.985693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:42:59.985863] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:04.117 [2024-11-18 13:42:59.986039] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:04.117 [2024-11-18 13:42:59.986111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:42:59.986327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:04.117 [2024-11-18 13:42:59.986381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:30:04.117 [2024-11-18 13:42:59.986398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:42:59.999025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:42:59.999104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:04.117 [2024-11-18 13:42:59.999145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.604 ms 00:30:04.117 [2024-11-18 13:42:59.999184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:42:59.999301] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:42:59.999321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:04.117 [2024-11-18 13:42:59.999657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:30:04.117 [2024-11-18 13:42:59.999758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:42:59.999827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:42:59.999872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:04.117 [2024-11-18 13:42:59.999947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:04.117 [2024-11-18 13:42:59.999966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.000241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:43:00.000313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:04.117 [2024-11-18 13:43:00.000360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:30:04.117 [2024-11-18 13:43:00.000377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.000435] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:04.117 [2024-11-18 13:43:00.000445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:43:00.000453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:04.117 [2024-11-18 13:43:00.000459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:04.117 [2024-11-18 13:43:00.000464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.006857] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:04.117 [2024-11-18 13:43:00.007037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:43:00.007048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:04.117 [2024-11-18 13:43:00.007056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.558 ms 00:30:04.117 [2024-11-18 13:43:00.007062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.008879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:43:00.008902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:04.117 [2024-11-18 13:43:00.008910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:30:04.117 [2024-11-18 13:43:00.008916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.008973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:43:00.008984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:04.117 [2024-11-18 13:43:00.008990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:04.117 [2024-11-18 13:43:00.008997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.009013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:43:00.009019] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:04.117 [2024-11-18 13:43:00.009025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:04.117 [2024-11-18 13:43:00.009030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.009053] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:04.117 [2024-11-18 13:43:00.009060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:43:00.009065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:04.117 [2024-11-18 13:43:00.009073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:04.117 [2024-11-18 13:43:00.009078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.012218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:43:00.012250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:04.117 [2024-11-18 13:43:00.012258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.122 ms 00:30:04.117 [2024-11-18 13:43:00.012264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.012317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.117 [2024-11-18 13:43:00.012324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:04.117 [2024-11-18 13:43:00.012330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:04.117 [2024-11-18 13:43:00.012336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.117 [2024-11-18 13:43:00.013404] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 59.012 ms, result 0 00:30:05.059  [2024-11-18T13:43:02.575Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-18T13:43:03.518Z] Copying: 36/1024 [MB] (17 MBps) [2024-11-18T13:43:04.460Z] Copying: 58/1024 [MB] (22 MBps) [2024-11-18T13:43:05.400Z] Copying: 76/1024 [MB] (18 MBps) [2024-11-18T13:43:06.340Z] Copying: 97/1024 [MB] (20 MBps) [2024-11-18T13:43:07.281Z] Copying: 118/1024 [MB] (21 MBps) [2024-11-18T13:43:08.225Z] Copying: 132/1024 [MB] (14 MBps) [2024-11-18T13:43:09.256Z] Copying: 160/1024 [MB] (28 MBps) [2024-11-18T13:43:10.200Z] Copying: 172/1024 [MB] (11 MBps) [2024-11-18T13:43:11.589Z] Copying: 190/1024 [MB] (17 MBps) [2024-11-18T13:43:12.158Z] Copying: 208/1024 [MB] (18 MBps) [2024-11-18T13:43:13.545Z] Copying: 233/1024 [MB] (24 MBps) [2024-11-18T13:43:14.491Z] Copying: 252/1024 [MB] (19 MBps) [2024-11-18T13:43:15.436Z] Copying: 263/1024 [MB] (10 MBps) [2024-11-18T13:43:16.382Z] Copying: 274/1024 [MB] (10 MBps) [2024-11-18T13:43:17.328Z] Copying: 284/1024 [MB] (10 MBps) [2024-11-18T13:43:18.273Z] Copying: 294/1024 [MB] (10 MBps) [2024-11-18T13:43:19.217Z] Copying: 305/1024 [MB] (10 MBps) [2024-11-18T13:43:20.161Z] Copying: 315/1024 [MB] (10 MBps) [2024-11-18T13:43:21.551Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-18T13:43:22.496Z] Copying: 337/1024 [MB] (10 MBps) [2024-11-18T13:43:23.439Z] Copying: 348/1024 [MB] (10 MBps) [2024-11-18T13:43:24.384Z] Copying: 358/1024 [MB] (10 MBps) [2024-11-18T13:43:25.330Z] Copying: 370/1024 [MB] (11 MBps) [2024-11-18T13:43:26.273Z] Copying: 381/1024 [MB] (11 MBps) [2024-11-18T13:43:27.218Z] Copying: 403/1024 [MB] (22 MBps) [2024-11-18T13:43:28.163Z] Copying: 
417/1024 [MB] (13 MBps) [2024-11-18T13:43:29.551Z] Copying: 438/1024 [MB] (21 MBps) [2024-11-18T13:43:30.497Z] Copying: 457/1024 [MB] (18 MBps) [2024-11-18T13:43:31.441Z] Copying: 473/1024 [MB] (16 MBps) [2024-11-18T13:43:32.384Z] Copying: 495/1024 [MB] (22 MBps) [2024-11-18T13:43:33.324Z] Copying: 515/1024 [MB] (19 MBps) [2024-11-18T13:43:34.268Z] Copying: 538/1024 [MB] (22 MBps) [2024-11-18T13:43:35.214Z] Copying: 555/1024 [MB] (17 MBps) [2024-11-18T13:43:36.161Z] Copying: 571/1024 [MB] (16 MBps) [2024-11-18T13:43:37.552Z] Copying: 586/1024 [MB] (14 MBps) [2024-11-18T13:43:38.499Z] Copying: 603/1024 [MB] (17 MBps) [2024-11-18T13:43:39.443Z] Copying: 624/1024 [MB] (21 MBps) [2024-11-18T13:43:40.388Z] Copying: 642/1024 [MB] (17 MBps) [2024-11-18T13:43:41.335Z] Copying: 652/1024 [MB] (10 MBps) [2024-11-18T13:43:42.283Z] Copying: 668/1024 [MB] (15 MBps) [2024-11-18T13:43:43.229Z] Copying: 684/1024 [MB] (16 MBps) [2024-11-18T13:43:44.176Z] Copying: 704/1024 [MB] (19 MBps) [2024-11-18T13:43:45.568Z] Copying: 715/1024 [MB] (10 MBps) [2024-11-18T13:43:46.516Z] Copying: 725/1024 [MB] (10 MBps) [2024-11-18T13:43:47.465Z] Copying: 736/1024 [MB] (10 MBps) [2024-11-18T13:43:48.411Z] Copying: 746/1024 [MB] (10 MBps) [2024-11-18T13:43:49.372Z] Copying: 756/1024 [MB] (10 MBps) [2024-11-18T13:43:50.315Z] Copying: 768/1024 [MB] (11 MBps) [2024-11-18T13:43:51.258Z] Copying: 778/1024 [MB] (10 MBps) [2024-11-18T13:43:52.201Z] Copying: 799/1024 [MB] (20 MBps) [2024-11-18T13:43:53.170Z] Copying: 810/1024 [MB] (10 MBps) [2024-11-18T13:43:54.166Z] Copying: 821/1024 [MB] (11 MBps) [2024-11-18T13:43:55.556Z] Copying: 832/1024 [MB] (10 MBps) [2024-11-18T13:43:56.501Z] Copying: 843/1024 [MB] (10 MBps) [2024-11-18T13:43:57.445Z] Copying: 856/1024 [MB] (13 MBps) [2024-11-18T13:43:58.389Z] Copying: 869/1024 [MB] (12 MBps) [2024-11-18T13:43:59.337Z] Copying: 880/1024 [MB] (11 MBps) [2024-11-18T13:44:00.278Z] Copying: 895/1024 [MB] (14 MBps) [2024-11-18T13:44:01.221Z] Copying: 916/1024 [MB] (20 MBps) [2024-11-18T13:44:02.167Z] Copying: 936/1024 [MB] (20 MBps) [2024-11-18T13:44:03.555Z] Copying: 951/1024 [MB] (15 MBps) [2024-11-18T13:44:04.499Z] Copying: 972/1024 [MB] (20 MBps) [2024-11-18T13:44:05.444Z] Copying: 989/1024 [MB] (17 MBps) [2024-11-18T13:44:06.389Z] Copying: 1002/1024 [MB] (13 MBps) [2024-11-18T13:44:06.389Z] Copying: 1022/1024 [MB] (19 MBps) [2024-11-18T13:44:06.652Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 13:44:06.609980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.524 [2024-11-18 13:44:06.610487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:10.524 [2024-11-18 13:44:06.610546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:10.524 [2024-11-18 13:44:06.610565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.524 [2024-11-18 13:44:06.610628] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:10.524 [2024-11-18 13:44:06.611641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.524 [2024-11-18 13:44:06.611711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:10.524 [2024-11-18 13:44:06.611734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:31:10.524 [2024-11-18 13:44:06.611752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.524 [2024-11-18 13:44:06.612061] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:31:10.524 [2024-11-18 13:44:06.612072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:10.524 [2024-11-18 13:44:06.612082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:31:10.524 [2024-11-18 13:44:06.612091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.524 [2024-11-18 13:44:06.612124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.524 [2024-11-18 13:44:06.612135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:10.524 [2024-11-18 13:44:06.612149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:10.524 [2024-11-18 13:44:06.612157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.524 [2024-11-18 13:44:06.612235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.524 [2024-11-18 13:44:06.612247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:10.524 [2024-11-18 13:44:06.612257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:10.524 [2024-11-18 13:44:06.612269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.524 [2024-11-18 13:44:06.612283] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:10.524 [2024-11-18 13:44:06.612297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:10.524 [2024-11-18 13:44:06.612310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:10.524 [2024-11-18 13:44:06.612318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:10.524 [2024-11-18 13:44:06.612327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:10.524 [2024-11-18 13:44:06.612335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:10.524 [2024-11-18 13:44:06.612347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 
[2024-11-18 13:44:06.612425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.612999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 
state: free 00:31:10.525 [2024-11-18 13:44:06.613090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 
0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:10.525 [2024-11-18 13:44:06.613393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:10.526 [2024-11-18 13:44:06.613583] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:10.526 [2024-11-18 13:44:06.613592] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f35d27a6-af2a-4f68-9f73-b853ff45a994 00:31:10.526 [2024-11-18 13:44:06.613600] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:10.526 [2024-11-18 13:44:06.613613] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:10.526 [2024-11-18 13:44:06.613621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:10.526 [2024-11-18 13:44:06.613634] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:10.526 [2024-11-18 13:44:06.613641] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:10.526 [2024-11-18 13:44:06.613650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:10.526 [2024-11-18 13:44:06.613658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:10.526 [2024-11-18 13:44:06.613665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:10.526 [2024-11-18 13:44:06.613672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:10.526 [2024-11-18 13:44:06.613679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.526 [2024-11-18 13:44:06.613693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:10.526 [2024-11-18 13:44:06.613702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:31:10.526 [2024-11-18 13:44:06.613712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.526 [2024-11-18 13:44:06.616224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.526 [2024-11-18 13:44:06.616263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:10.526 [2024-11-18 13:44:06.616275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.492 ms 00:31:10.526 
[2024-11-18 13:44:06.616284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.526 [2024-11-18 13:44:06.616412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.526 [2024-11-18 13:44:06.616422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:10.526 [2024-11-18 13:44:06.616434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:31:10.526 [2024-11-18 13:44:06.616442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.526 [2024-11-18 13:44:06.625262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.526 [2024-11-18 13:44:06.625313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:10.526 [2024-11-18 13:44:06.625325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.526 [2024-11-18 13:44:06.625334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.526 [2024-11-18 13:44:06.625407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.526 [2024-11-18 13:44:06.625417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:10.526 [2024-11-18 13:44:06.625433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.526 [2024-11-18 13:44:06.625441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.526 [2024-11-18 13:44:06.625509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.526 [2024-11-18 13:44:06.625520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:10.526 [2024-11-18 13:44:06.625529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.526 [2024-11-18 13:44:06.625537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.526 [2024-11-18 13:44:06.625555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.526 [2024-11-18 13:44:06.625563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:10.526 [2024-11-18 13:44:06.625572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.526 [2024-11-18 13:44:06.625585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.526 [2024-11-18 13:44:06.640438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.526 [2024-11-18 13:44:06.640492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:10.526 [2024-11-18 13:44:06.640503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.526 [2024-11-18 13:44:06.640512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.788 [2024-11-18 13:44:06.652339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.788 [2024-11-18 13:44:06.652390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:10.788 [2024-11-18 13:44:06.652410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.788 [2024-11-18 13:44:06.652418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.788 [2024-11-18 13:44:06.652467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.789 [2024-11-18 13:44:06.652478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:10.789 [2024-11-18 13:44:06.652487] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.789 [2024-11-18 13:44:06.652496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.789 [2024-11-18 13:44:06.652532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.789 [2024-11-18 13:44:06.652541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:10.789 [2024-11-18 13:44:06.652549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.789 [2024-11-18 13:44:06.652557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.789 [2024-11-18 13:44:06.652616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.789 [2024-11-18 13:44:06.652634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:10.789 [2024-11-18 13:44:06.652643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.789 [2024-11-18 13:44:06.652655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.789 [2024-11-18 13:44:06.652682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.789 [2024-11-18 13:44:06.652691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:10.789 [2024-11-18 13:44:06.652700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.789 [2024-11-18 13:44:06.652707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.789 [2024-11-18 13:44:06.652752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.789 [2024-11-18 13:44:06.652761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:10.789 [2024-11-18 13:44:06.652770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.789 [2024-11-18 13:44:06.652777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.789 [2024-11-18 13:44:06.652821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.789 [2024-11-18 13:44:06.652832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:10.789 [2024-11-18 13:44:06.652840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.789 [2024-11-18 13:44:06.652848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.789 [2024-11-18 13:44:06.653108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.994 ms, result 0 00:31:10.789 00:31:10.789 00:31:10.789 13:44:06 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:13.340 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:13.340 13:44:09 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:13.340 [2024-11-18 13:44:09.142403] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
00:31:13.340 [2024-11-18 13:44:09.142728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94443 ] 00:31:13.340 [2024-11-18 13:44:09.300576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.340 [2024-11-18 13:44:09.331006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.340 [2024-11-18 13:44:09.446483] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:13.340 [2024-11-18 13:44:09.446830] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:13.604 [2024-11-18 13:44:09.608293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.608521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:13.604 [2024-11-18 13:44:09.608548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:13.604 [2024-11-18 13:44:09.608558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.608634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.608646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:13.604 [2024-11-18 13:44:09.608656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:13.604 [2024-11-18 13:44:09.608669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.608695] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:13.604 [2024-11-18 13:44:09.608962] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:13.604 [2024-11-18 13:44:09.608982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.608998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:13.604 [2024-11-18 13:44:09.609008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:31:13.604 [2024-11-18 13:44:09.609021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.609506] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:13.604 [2024-11-18 13:44:09.609540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.609549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:13.604 [2024-11-18 13:44:09.609559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:13.604 [2024-11-18 13:44:09.609575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.609634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.609647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:13.604 [2024-11-18 13:44:09.609656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:13.604 [2024-11-18 13:44:09.609671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.609909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:13.604 [2024-11-18 13:44:09.609921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:13.604 [2024-11-18 13:44:09.609930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:31:13.604 [2024-11-18 13:44:09.609938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.610025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.610035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:13.604 [2024-11-18 13:44:09.610044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:13.604 [2024-11-18 13:44:09.610055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.610080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.610093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:13.604 [2024-11-18 13:44:09.610102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:13.604 [2024-11-18 13:44:09.610109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.610137] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:13.604 [2024-11-18 13:44:09.612385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.612424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:13.604 [2024-11-18 13:44:09.612436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:31:13.604 [2024-11-18 13:44:09.612445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.612481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.612490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:13.604 [2024-11-18 13:44:09.612500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:13.604 [2024-11-18 13:44:09.612509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.612567] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:13.604 [2024-11-18 13:44:09.612591] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:13.604 [2024-11-18 13:44:09.612637] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:13.604 [2024-11-18 13:44:09.612654] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:13.604 [2024-11-18 13:44:09.612765] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:13.604 [2024-11-18 13:44:09.612778] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:13.604 [2024-11-18 13:44:09.612789] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:13.604 [2024-11-18 13:44:09.612805] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:13.604 [2024-11-18 13:44:09.612814] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:13.604 [2024-11-18 13:44:09.612826] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:13.604 [2024-11-18 13:44:09.612834] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:13.604 [2024-11-18 13:44:09.612842] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:13.604 [2024-11-18 13:44:09.612849] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:13.604 [2024-11-18 13:44:09.612857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.612865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:13.604 [2024-11-18 13:44:09.612873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:31:13.604 [2024-11-18 13:44:09.612880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.612963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.604 [2024-11-18 13:44:09.612976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:13.604 [2024-11-18 13:44:09.612987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:13.604 [2024-11-18 13:44:09.612995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.604 [2024-11-18 13:44:09.613097] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:13.604 [2024-11-18 13:44:09.613108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:13.604 [2024-11-18 13:44:09.613119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:13.604 [2024-11-18 13:44:09.613131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.604 [2024-11-18 13:44:09.613139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:13.604 [2024-11-18 13:44:09.613146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:13.604 [2024-11-18 13:44:09.613153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:13.604 [2024-11-18 13:44:09.613184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:13.604 [2024-11-18 13:44:09.613192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:13.604 [2024-11-18 13:44:09.613199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:13.604 [2024-11-18 13:44:09.613207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:13.604 [2024-11-18 13:44:09.613213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:13.604 [2024-11-18 13:44:09.613222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:13.604 [2024-11-18 13:44:09.613230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:13.604 [2024-11-18 13:44:09.613237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:13.604 [2024-11-18 13:44:09.613245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.604 [2024-11-18 13:44:09.613252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:13.604 [2024-11-18 13:44:09.613259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:13.604 [2024-11-18 13:44:09.613271] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.604 [2024-11-18 13:44:09.613278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:13.604 [2024-11-18 13:44:09.613285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:13.604 [2024-11-18 13:44:09.613291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:13.604 [2024-11-18 13:44:09.613298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:13.604 [2024-11-18 13:44:09.613306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:13.604 [2024-11-18 13:44:09.613312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:13.604 [2024-11-18 13:44:09.613318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:13.604 [2024-11-18 13:44:09.613325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:13.604 [2024-11-18 13:44:09.613332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:13.604 [2024-11-18 13:44:09.613338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:13.604 [2024-11-18 13:44:09.613345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:13.604 [2024-11-18 13:44:09.613352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:13.604 [2024-11-18 13:44:09.613358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:13.604 [2024-11-18 13:44:09.613364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:13.605 [2024-11-18 13:44:09.613371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:13.605 [2024-11-18 13:44:09.613383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:13.605 [2024-11-18 13:44:09.613390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:13.605 [2024-11-18 13:44:09.613397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:13.605 [2024-11-18 13:44:09.613403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:13.605 [2024-11-18 13:44:09.613410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:13.605 [2024-11-18 13:44:09.613416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.605 [2024-11-18 13:44:09.613422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:13.605 [2024-11-18 13:44:09.613429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:13.605 [2024-11-18 13:44:09.613435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.605 [2024-11-18 13:44:09.613441] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:13.605 [2024-11-18 13:44:09.613452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:13.605 [2024-11-18 13:44:09.613464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:13.605 [2024-11-18 13:44:09.613471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:13.605 [2024-11-18 13:44:09.613481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:13.605 [2024-11-18 13:44:09.613488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:13.605 [2024-11-18 13:44:09.613495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:13.605 
[2024-11-18 13:44:09.613504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:13.605 [2024-11-18 13:44:09.613510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:13.605 [2024-11-18 13:44:09.613517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:13.605 [2024-11-18 13:44:09.613525] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:13.605 [2024-11-18 13:44:09.613538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:13.605 [2024-11-18 13:44:09.613546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:13.605 [2024-11-18 13:44:09.613554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:13.605 [2024-11-18 13:44:09.613561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:13.605 [2024-11-18 13:44:09.613567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:13.605 [2024-11-18 13:44:09.613574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:13.605 [2024-11-18 13:44:09.613581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:13.605 [2024-11-18 13:44:09.613588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:13.605 [2024-11-18 13:44:09.613595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:13.605 [2024-11-18 13:44:09.613601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:13.605 [2024-11-18 13:44:09.613609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:13.605 [2024-11-18 13:44:09.613616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:13.605 [2024-11-18 13:44:09.613625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:13.605 [2024-11-18 13:44:09.613631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:13.605 [2024-11-18 13:44:09.613639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:13.605 [2024-11-18 13:44:09.613646] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:13.605 [2024-11-18 13:44:09.613654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:13.605 [2024-11-18 13:44:09.613667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:13.605 [2024-11-18 13:44:09.613674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:13.605 [2024-11-18 13:44:09.613681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:13.605 [2024-11-18 13:44:09.613688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:13.605 [2024-11-18 13:44:09.613695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.613704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:13.605 [2024-11-18 13:44:09.613713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:31:13.605 [2024-11-18 13:44:09.613721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.623991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.624044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:13.605 [2024-11-18 13:44:09.624055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.227 ms 00:31:13.605 [2024-11-18 13:44:09.624062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.624150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.624158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:13.605 [2024-11-18 13:44:09.624200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:13.605 [2024-11-18 13:44:09.624209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.644847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.644928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:13.605 [2024-11-18 13:44:09.644947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.570 ms 00:31:13.605 [2024-11-18 13:44:09.644960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.645022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.645036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:13.605 [2024-11-18 13:44:09.645050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:13.605 [2024-11-18 13:44:09.645062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.645241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.645259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:13.605 [2024-11-18 13:44:09.645279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:13.605 [2024-11-18 13:44:09.645292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.645477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.645498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:13.605 [2024-11-18 13:44:09.645513] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:31:13.605 [2024-11-18 13:44:09.645526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.654590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.654648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:13.605 [2024-11-18 13:44:09.654663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.034 ms 00:31:13.605 [2024-11-18 13:44:09.654685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.654854] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:13.605 [2024-11-18 13:44:09.654878] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:13.605 [2024-11-18 13:44:09.654892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.654904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:13.605 [2024-11-18 13:44:09.654916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:13.605 [2024-11-18 13:44:09.654927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.667554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.667776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:13.605 [2024-11-18 13:44:09.667805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.598 ms 00:31:13.605 [2024-11-18 13:44:09.667813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.667953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.667964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:13.605 [2024-11-18 13:44:09.667978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:31:13.605 [2024-11-18 13:44:09.667985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.668047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.668063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:13.605 [2024-11-18 13:44:09.668078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:13.605 [2024-11-18 13:44:09.668086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.668435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.668455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:13.605 [2024-11-18 13:44:09.668472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:31:13.605 [2024-11-18 13:44:09.668480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.605 [2024-11-18 13:44:09.668498] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:13.605 [2024-11-18 13:44:09.668515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.605 [2024-11-18 13:44:09.668523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:13.605 [2024-11-18 13:44:09.668535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:13.606 [2024-11-18 13:44:09.668547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.606 [2024-11-18 13:44:09.677993] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:13.606 [2024-11-18 13:44:09.678155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.606 [2024-11-18 13:44:09.678184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:13.606 [2024-11-18 13:44:09.678195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.588 ms 00:31:13.606 [2024-11-18 13:44:09.678204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.606 [2024-11-18 13:44:09.680646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.606 [2024-11-18 13:44:09.680681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:13.606 [2024-11-18 13:44:09.680691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:31:13.606 [2024-11-18 13:44:09.680702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.606 [2024-11-18 13:44:09.680795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.606 [2024-11-18 13:44:09.680805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:13.606 [2024-11-18 13:44:09.680815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:13.606 [2024-11-18 13:44:09.680823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.606 [2024-11-18 13:44:09.680850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.606 [2024-11-18 13:44:09.680859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:13.606 [2024-11-18 13:44:09.680866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:13.606 [2024-11-18 13:44:09.680874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.606 [2024-11-18 13:44:09.680907] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:13.606 [2024-11-18 13:44:09.680917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.606 [2024-11-18 13:44:09.680924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:13.606 [2024-11-18 13:44:09.680933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:13.606 [2024-11-18 13:44:09.680941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.606 [2024-11-18 13:44:09.687373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.606 [2024-11-18 13:44:09.687432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:13.606 [2024-11-18 13:44:09.687443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.411 ms 00:31:13.606 [2024-11-18 13:44:09.687450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.606 [2024-11-18 13:44:09.687539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.606 [2024-11-18 13:44:09.687556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:13.606 [2024-11-18 13:44:09.687564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:31:13.606 [2024-11-18 13:44:09.687576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.606 [2024-11-18 13:44:09.688868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.985 ms, result 0 00:31:14.995  [2024-11-18T13:44:12.068Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-18T13:44:13.015Z] Copying: 27/1024 [MB] (10 MBps) [2024-11-18T13:44:13.962Z] Copying: 37/1024 [MB] (10 MBps) [2024-11-18T13:44:14.908Z] Copying: 49/1024 [MB] (11 MBps) [2024-11-18T13:44:15.853Z] Copying: 60/1024 [MB] (10 MBps) [2024-11-18T13:44:16.787Z] Copying: 72/1024 [MB] (12 MBps) [2024-11-18T13:44:17.719Z] Copying: 109/1024 [MB] (36 MBps) [2024-11-18T13:44:19.098Z] Copying: 162/1024 [MB] (53 MBps) [2024-11-18T13:44:20.079Z] Copying: 205/1024 [MB] (43 MBps) [2024-11-18T13:44:21.048Z] Copying: 218/1024 [MB] (12 MBps) [2024-11-18T13:44:21.993Z] Copying: 231/1024 [MB] (13 MBps) [2024-11-18T13:44:22.937Z] Copying: 242/1024 [MB] (10 MBps) [2024-11-18T13:44:23.875Z] Copying: 252/1024 [MB] (10 MBps) [2024-11-18T13:44:24.809Z] Copying: 283/1024 [MB] (30 MBps) [2024-11-18T13:44:25.740Z] Copying: 338/1024 [MB] (55 MBps) [2024-11-18T13:44:27.116Z] Copying: 396/1024 [MB] (58 MBps) [2024-11-18T13:44:28.061Z] Copying: 452/1024 [MB] (55 MBps) [2024-11-18T13:44:29.007Z] Copying: 473/1024 [MB] (20 MBps) [2024-11-18T13:44:29.950Z] Copying: 487/1024 [MB] (13 MBps) [2024-11-18T13:44:30.889Z] Copying: 508/1024 [MB] (20 MBps) [2024-11-18T13:44:31.833Z] Copying: 548/1024 [MB] (40 MBps) [2024-11-18T13:44:32.775Z] Copying: 565/1024 [MB] (16 MBps) [2024-11-18T13:44:33.714Z] Copying: 586/1024 [MB] (21 MBps) [2024-11-18T13:44:35.095Z] Copying: 606/1024 [MB] (20 MBps) [2024-11-18T13:44:36.035Z] Copying: 619/1024 [MB] (13 MBps) [2024-11-18T13:44:36.977Z] Copying: 648/1024 [MB] (28 MBps) [2024-11-18T13:44:37.919Z] Copying: 664/1024 [MB] (16 MBps) [2024-11-18T13:44:38.861Z] Copying: 680/1024 [MB] (15 MBps) [2024-11-18T13:44:39.803Z] Copying: 690/1024 [MB] (10 MBps) [2024-11-18T13:44:40.746Z] Copying: 710/1024 [MB] (19 MBps) [2024-11-18T13:44:42.128Z] Copying: 728/1024 [MB] (18 MBps) [2024-11-18T13:44:43.067Z] Copying: 746/1024 [MB] (18 MBps) [2024-11-18T13:44:44.008Z] Copying: 761/1024 [MB] (14 MBps) [2024-11-18T13:44:44.953Z] Copying: 781/1024 [MB] (19 MBps) [2024-11-18T13:44:45.895Z] Copying: 795/1024 [MB] (14 MBps) [2024-11-18T13:44:46.851Z] Copying: 812/1024 [MB] (17 MBps) [2024-11-18T13:44:47.798Z] Copying: 837/1024 [MB] (24 MBps) [2024-11-18T13:44:48.742Z] Copying: 854/1024 [MB] (16 MBps) [2024-11-18T13:44:50.130Z] Copying: 874/1024 [MB] (19 MBps) [2024-11-18T13:44:51.071Z] Copying: 892/1024 [MB] (17 MBps) [2024-11-18T13:44:52.017Z] Copying: 906/1024 [MB] (14 MBps) [2024-11-18T13:44:52.962Z] Copying: 916/1024 [MB] (10 MBps) [2024-11-18T13:44:53.943Z] Copying: 927/1024 [MB] (10 MBps) [2024-11-18T13:44:54.959Z] Copying: 937/1024 [MB] (10 MBps) [2024-11-18T13:44:55.905Z] Copying: 947/1024 [MB] (10 MBps) [2024-11-18T13:44:56.851Z] Copying: 957/1024 [MB] (10 MBps) [2024-11-18T13:44:57.798Z] Copying: 967/1024 [MB] (10 MBps) [2024-11-18T13:44:58.739Z] Copying: 978/1024 [MB] (10 MBps) [2024-11-18T13:45:00.125Z] Copying: 1004/1024 [MB] (26 MBps) [2024-11-18T13:45:01.070Z] Copying: 1023/1024 [MB] (18 MBps) [2024-11-18T13:45:01.070Z] Copying: 1048508/1048576 [kB] (924 kBps) [2024-11-18T13:45:01.070Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-18 13:45:00.797784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:04.942 [2024-11-18 13:45:00.797861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:04.942 [2024-11-18 13:45:00.797880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:04.942 [2024-11-18 13:45:00.797889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.942 [2024-11-18 13:45:00.800071] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:04.942 [2024-11-18 13:45:00.803356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.942 [2024-11-18 13:45:00.803408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:04.942 [2024-11-18 13:45:00.803432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:32:04.943 [2024-11-18 13:45:00.803447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.943 [2024-11-18 13:45:00.814826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.943 [2024-11-18 13:45:00.814895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:04.943 [2024-11-18 13:45:00.814910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.978 ms 00:32:04.943 [2024-11-18 13:45:00.814918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.943 [2024-11-18 13:45:00.814949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.943 [2024-11-18 13:45:00.814959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:04.943 [2024-11-18 13:45:00.814968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.943 [2024-11-18 13:45:00.814976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.943 [2024-11-18 13:45:00.815035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.943 [2024-11-18 13:45:00.815045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:04.943 [2024-11-18 13:45:00.815067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:04.943 [2024-11-18 13:45:00.815076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.943 [2024-11-18 13:45:00.815089] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:04.943 [2024-11-18 13:45:00.815102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128000 / 261120 wr_cnt: 1 state: open 00:32:04.943 [2024-11-18 13:45:00.815113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 
wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815632] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:04.943 [2024-11-18 13:45:00.815773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815832] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:04.944 [2024-11-18 13:45:00.815983] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:04.944 [2024-11-18 13:45:00.815995] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f35d27a6-af2a-4f68-9f73-b853ff45a994 00:32:04.944 [2024-11-18 13:45:00.816004] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128000 00:32:04.944 [2024-11-18 13:45:00.816011] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128032 00:32:04.944 [2024-11-18 13:45:00.816018] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128000 00:32:04.944 [2024-11-18 13:45:00.816026] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:32:04.944 [2024-11-18 13:45:00.816033] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:04.944 [2024-11-18 13:45:00.816043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:04.944 [2024-11-18 13:45:00.816051] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:32:04.944 [2024-11-18 13:45:00.816057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:04.944 [2024-11-18 13:45:00.816063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:04.944 [2024-11-18 13:45:00.816071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.944 [2024-11-18 13:45:00.816079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:04.944 [2024-11-18 13:45:00.816088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:32:04.944 [2024-11-18 13:45:00.816096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.818393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.944 [2024-11-18 13:45:00.818427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:04.944 [2024-11-18 13:45:00.818439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:32:04.944 [2024-11-18 13:45:00.818456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.818605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.944 [2024-11-18 13:45:00.818615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:04.944 [2024-11-18 13:45:00.818624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:32:04.944 [2024-11-18 13:45:00.818632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.826307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.826357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.944 [2024-11-18 13:45:00.826368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.826377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.826433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.826441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.944 [2024-11-18 13:45:00.826449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.826458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.826515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.826530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.944 [2024-11-18 13:45:00.826541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.826549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.826565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.826574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.944 [2024-11-18 13:45:00.826581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.826589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.839624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 
13:45:00.839677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.944 [2024-11-18 13:45:00.839694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.839702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.850400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.850449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.944 [2024-11-18 13:45:00.850461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.850469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.850513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.850522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.944 [2024-11-18 13:45:00.850530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.850539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.850582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.850592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.944 [2024-11-18 13:45:00.850600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.850608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.850660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.850678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.944 [2024-11-18 13:45:00.850686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.850694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.850721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.850730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:04.944 [2024-11-18 13:45:00.850739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.850747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.850785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.850795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.944 [2024-11-18 13:45:00.850803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.850811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.850859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.944 [2024-11-18 13:45:00.850870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.944 [2024-11-18 13:45:00.850879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.944 [2024-11-18 13:45:00.850886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.944 [2024-11-18 13:45:00.851020] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 55.329 ms, result 0 00:32:05.889 00:32:05.889 00:32:05.889 13:45:01 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:05.889 [2024-11-18 13:45:01.950446] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 00:32:05.889 [2024-11-18 13:45:01.950599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94975 ] 00:32:06.150 [2024-11-18 13:45:02.114263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.150 [2024-11-18 13:45:02.142852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.150 [2024-11-18 13:45:02.257377] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:06.150 [2024-11-18 13:45:02.257462] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:06.414 [2024-11-18 13:45:02.420556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.420621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:06.414 [2024-11-18 13:45:02.420640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:06.414 [2024-11-18 13:45:02.420652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.420717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.420729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:06.414 [2024-11-18 13:45:02.420738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:06.414 [2024-11-18 13:45:02.420746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.420770] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:06.414 [2024-11-18 13:45:02.421044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:06.414 [2024-11-18 13:45:02.421060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.421069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:06.414 [2024-11-18 13:45:02.421078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:32:06.414 [2024-11-18 13:45:02.421089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.421390] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:06.414 [2024-11-18 13:45:02.421417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.421425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:06.414 [2024-11-18 13:45:02.421435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:06.414 [2024-11-18 13:45:02.421443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 
[2024-11-18 13:45:02.421505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.421518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:06.414 [2024-11-18 13:45:02.421526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:06.414 [2024-11-18 13:45:02.421533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.421888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.421901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:06.414 [2024-11-18 13:45:02.421910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:32:06.414 [2024-11-18 13:45:02.421920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.422004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.422018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:06.414 [2024-11-18 13:45:02.422027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:06.414 [2024-11-18 13:45:02.422035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.422060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.422069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:06.414 [2024-11-18 13:45:02.422077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:06.414 [2024-11-18 13:45:02.422085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.422111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:06.414 [2024-11-18 13:45:02.424222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.424257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:06.414 [2024-11-18 13:45:02.424269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.118 ms 00:32:06.414 [2024-11-18 13:45:02.424278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.424314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.424331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:06.414 [2024-11-18 13:45:02.424346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:06.414 [2024-11-18 13:45:02.424359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.424390] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:06.414 [2024-11-18 13:45:02.424412] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:06.414 [2024-11-18 13:45:02.424451] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:06.414 [2024-11-18 13:45:02.424468] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:06.414 [2024-11-18 13:45:02.424579] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob store 0x150 bytes 00:32:06.414 [2024-11-18 13:45:02.424594] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:06.414 [2024-11-18 13:45:02.424609] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:06.414 [2024-11-18 13:45:02.424621] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:06.414 [2024-11-18 13:45:02.424631] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:06.414 [2024-11-18 13:45:02.424644] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:06.414 [2024-11-18 13:45:02.424652] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:06.414 [2024-11-18 13:45:02.424661] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:06.414 [2024-11-18 13:45:02.424675] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:06.414 [2024-11-18 13:45:02.424684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.424692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:06.414 [2024-11-18 13:45:02.424702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:32:06.414 [2024-11-18 13:45:02.424711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.424810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.414 [2024-11-18 13:45:02.424819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:06.414 [2024-11-18 13:45:02.424830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:06.414 [2024-11-18 13:45:02.424838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.414 [2024-11-18 13:45:02.424946] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:06.414 [2024-11-18 13:45:02.424957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:06.414 [2024-11-18 13:45:02.424965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:06.414 [2024-11-18 13:45:02.424973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.414 [2024-11-18 13:45:02.424981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:06.414 [2024-11-18 13:45:02.424989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:06.414 [2024-11-18 13:45:02.425000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:06.414 [2024-11-18 13:45:02.425015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:06.414 [2024-11-18 13:45:02.425022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:06.414 [2024-11-18 13:45:02.425029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:06.414 [2024-11-18 13:45:02.425038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:06.414 [2024-11-18 13:45:02.425045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:06.414 [2024-11-18 13:45:02.425052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:06.414 [2024-11-18 13:45:02.425059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md 00:32:06.414 [2024-11-18 13:45:02.425066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:06.414 [2024-11-18 13:45:02.425073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.414 [2024-11-18 13:45:02.425080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:06.414 [2024-11-18 13:45:02.425088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:06.414 [2024-11-18 13:45:02.425094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.415 [2024-11-18 13:45:02.425101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:06.415 [2024-11-18 13:45:02.425108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:06.415 [2024-11-18 13:45:02.425115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:06.415 [2024-11-18 13:45:02.425124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:06.415 [2024-11-18 13:45:02.425131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:06.415 [2024-11-18 13:45:02.425137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:06.415 [2024-11-18 13:45:02.425143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:06.415 [2024-11-18 13:45:02.425150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:06.415 [2024-11-18 13:45:02.425157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:06.415 [2024-11-18 13:45:02.425181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:06.415 [2024-11-18 13:45:02.425189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:06.415 [2024-11-18 13:45:02.425196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:06.415 [2024-11-18 13:45:02.425202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:06.415 [2024-11-18 13:45:02.425208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:06.415 [2024-11-18 13:45:02.425215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:06.415 [2024-11-18 13:45:02.425222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:06.415 [2024-11-18 13:45:02.425228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:06.415 [2024-11-18 13:45:02.425235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:06.415 [2024-11-18 13:45:02.425241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:06.415 [2024-11-18 13:45:02.425251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:06.415 [2024-11-18 13:45:02.425258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.415 [2024-11-18 13:45:02.425265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:06.415 [2024-11-18 13:45:02.425273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:06.415 [2024-11-18 13:45:02.425283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.415 [2024-11-18 13:45:02.425290] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:06.415 [2024-11-18 13:45:02.425298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:06.415 [2024-11-18 13:45:02.425310] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:06.415 [2024-11-18 13:45:02.425317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.415 [2024-11-18 13:45:02.425328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:06.415 [2024-11-18 13:45:02.425335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:06.415 [2024-11-18 13:45:02.425342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:06.415 [2024-11-18 13:45:02.425349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:06.415 [2024-11-18 13:45:02.425356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:06.415 [2024-11-18 13:45:02.425364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:06.415 [2024-11-18 13:45:02.425372] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:06.415 [2024-11-18 13:45:02.425388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:06.415 [2024-11-18 13:45:02.425396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:06.415 [2024-11-18 13:45:02.425404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:06.415 [2024-11-18 13:45:02.425411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:06.415 [2024-11-18 13:45:02.425418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:06.415 [2024-11-18 13:45:02.425426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:06.415 [2024-11-18 13:45:02.425434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:06.415 [2024-11-18 13:45:02.425442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:06.415 [2024-11-18 13:45:02.425449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:06.415 [2024-11-18 13:45:02.425456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:06.415 [2024-11-18 13:45:02.425464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:06.415 [2024-11-18 13:45:02.425470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:06.415 [2024-11-18 13:45:02.425478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:06.415 [2024-11-18 13:45:02.425485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:06.415 [2024-11-18 13:45:02.425492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:06.415 [2024-11-18 13:45:02.425499] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:06.415 [2024-11-18 13:45:02.425510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:06.415 [2024-11-18 13:45:02.425519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:06.415 [2024-11-18 13:45:02.425526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:06.415 [2024-11-18 13:45:02.425533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:06.415 [2024-11-18 13:45:02.425543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:06.415 [2024-11-18 13:45:02.425551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.425558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:06.415 [2024-11-18 13:45:02.425566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:32:06.415 [2024-11-18 13:45:02.425574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.435412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.435456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:06.415 [2024-11-18 13:45:02.435467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.798 ms 00:32:06.415 [2024-11-18 13:45:02.435475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.435557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.435571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:06.415 [2024-11-18 13:45:02.435580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:06.415 [2024-11-18 13:45:02.435590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.461748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.461853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:06.415 [2024-11-18 13:45:02.461885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.093 ms 00:32:06.415 [2024-11-18 13:45:02.461918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.462013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.462050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:06.415 [2024-11-18 13:45:02.462073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:06.415 [2024-11-18 13:45:02.462092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.462371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.462404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize trim map 00:32:06.415 [2024-11-18 13:45:02.462438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:32:06.415 [2024-11-18 13:45:02.462461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.462795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.462836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:06.415 [2024-11-18 13:45:02.462858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:32:06.415 [2024-11-18 13:45:02.462878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.470592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.470637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:06.415 [2024-11-18 13:45:02.470654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.661 ms 00:32:06.415 [2024-11-18 13:45:02.470664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.470791] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:06.415 [2024-11-18 13:45:02.470804] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:06.415 [2024-11-18 13:45:02.470813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.470821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:06.415 [2024-11-18 13:45:02.470830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:06.415 [2024-11-18 13:45:02.470837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.483304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.483343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:06.415 [2024-11-18 13:45:02.483362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.447 ms 00:32:06.415 [2024-11-18 13:45:02.483369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.415 [2024-11-18 13:45:02.483498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.415 [2024-11-18 13:45:02.483508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:06.416 [2024-11-18 13:45:02.483520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:32:06.416 [2024-11-18 13:45:02.483527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.483580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.416 [2024-11-18 13:45:02.483590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:06.416 [2024-11-18 13:45:02.483606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:06.416 [2024-11-18 13:45:02.483614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.483926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.416 [2024-11-18 13:45:02.483946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:06.416 [2024-11-18 13:45:02.483955] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:32:06.416 [2024-11-18 13:45:02.483962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.483978] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:06.416 [2024-11-18 13:45:02.483987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.416 [2024-11-18 13:45:02.483999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:06.416 [2024-11-18 13:45:02.484016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:06.416 [2024-11-18 13:45:02.484031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.493312] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:06.416 [2024-11-18 13:45:02.493469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.416 [2024-11-18 13:45:02.493485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:06.416 [2024-11-18 13:45:02.493495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.421 ms 00:32:06.416 [2024-11-18 13:45:02.493503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.496180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.416 [2024-11-18 13:45:02.496215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:06.416 [2024-11-18 13:45:02.496224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:32:06.416 [2024-11-18 13:45:02.496232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.496309] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:06.416 [2024-11-18 13:45:02.496908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.416 [2024-11-18 13:45:02.496928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:06.416 [2024-11-18 13:45:02.496939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:32:06.416 [2024-11-18 13:45:02.496953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.496981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.416 [2024-11-18 13:45:02.496993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:06.416 [2024-11-18 13:45:02.497001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:06.416 [2024-11-18 13:45:02.497008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.497042] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:06.416 [2024-11-18 13:45:02.497051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.416 [2024-11-18 13:45:02.497059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:06.416 [2024-11-18 13:45:02.497066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:06.416 [2024-11-18 13:45:02.497074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.503125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:32:06.416 [2024-11-18 13:45:02.503206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:06.416 [2024-11-18 13:45:02.503224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.030 ms 00:32:06.416 [2024-11-18 13:45:02.503237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.503326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.416 [2024-11-18 13:45:02.503337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:06.416 [2024-11-18 13:45:02.503346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:06.416 [2024-11-18 13:45:02.503354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.416 [2024-11-18 13:45:02.504672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 83.664 ms, result 0 00:32:07.802  [2024-11-18T13:45:04.875Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-18T13:45:05.820Z] Copying: 41/1024 [MB] (20 MBps) [2024-11-18T13:45:06.767Z] Copying: 60/1024 [MB] (19 MBps) [2024-11-18T13:45:07.712Z] Copying: 81/1024 [MB] (20 MBps) [2024-11-18T13:45:09.095Z] Copying: 104/1024 [MB] (22 MBps) [2024-11-18T13:45:10.038Z] Copying: 119/1024 [MB] (15 MBps) [2024-11-18T13:45:10.981Z] Copying: 142/1024 [MB] (22 MBps) [2024-11-18T13:45:11.925Z] Copying: 157/1024 [MB] (15 MBps) [2024-11-18T13:45:12.868Z] Copying: 168/1024 [MB] (10 MBps) [2024-11-18T13:45:13.812Z] Copying: 178/1024 [MB] (10 MBps) [2024-11-18T13:45:14.758Z] Copying: 189/1024 [MB] (10 MBps) [2024-11-18T13:45:15.704Z] Copying: 199/1024 [MB] (10 MBps) [2024-11-18T13:45:17.093Z] Copying: 210/1024 [MB] (10 MBps) [2024-11-18T13:45:18.037Z] Copying: 221/1024 [MB] (10 MBps) [2024-11-18T13:45:18.980Z] Copying: 232/1024 [MB] (11 MBps) [2024-11-18T13:45:19.925Z] Copying: 244/1024 [MB] (11 MBps) [2024-11-18T13:45:20.869Z] Copying: 256/1024 [MB] (11 MBps) [2024-11-18T13:45:21.813Z] Copying: 267/1024 [MB] (11 MBps) [2024-11-18T13:45:22.762Z] Copying: 280/1024 [MB] (12 MBps) [2024-11-18T13:45:23.707Z] Copying: 290/1024 [MB] (10 MBps) [2024-11-18T13:45:25.139Z] Copying: 301/1024 [MB] (10 MBps) [2024-11-18T13:45:25.723Z] Copying: 311/1024 [MB] (10 MBps) [2024-11-18T13:45:27.111Z] Copying: 322/1024 [MB] (10 MBps) [2024-11-18T13:45:28.058Z] Copying: 333/1024 [MB] (10 MBps) [2024-11-18T13:45:29.003Z] Copying: 343/1024 [MB] (10 MBps) [2024-11-18T13:45:29.948Z] Copying: 354/1024 [MB] (10 MBps) [2024-11-18T13:45:30.893Z] Copying: 364/1024 [MB] (10 MBps) [2024-11-18T13:45:31.839Z] Copying: 375/1024 [MB] (10 MBps) [2024-11-18T13:45:32.784Z] Copying: 385/1024 [MB] (10 MBps) [2024-11-18T13:45:33.732Z] Copying: 395/1024 [MB] (10 MBps) [2024-11-18T13:45:35.123Z] Copying: 410/1024 [MB] (14 MBps) [2024-11-18T13:45:36.068Z] Copying: 420/1024 [MB] (10 MBps) [2024-11-18T13:45:37.012Z] Copying: 431/1024 [MB] (10 MBps) [2024-11-18T13:45:37.954Z] Copying: 444/1024 [MB] (13 MBps) [2024-11-18T13:45:38.896Z] Copying: 456/1024 [MB] (11 MBps) [2024-11-18T13:45:39.839Z] Copying: 467/1024 [MB] (11 MBps) [2024-11-18T13:45:40.779Z] Copying: 480/1024 [MB] (12 MBps) [2024-11-18T13:45:41.723Z] Copying: 497/1024 [MB] (16 MBps) [2024-11-18T13:45:43.105Z] Copying: 511/1024 [MB] (14 MBps) [2024-11-18T13:45:44.045Z] Copying: 534/1024 [MB] (22 MBps) [2024-11-18T13:45:44.985Z] Copying: 555/1024 [MB] (21 MBps) [2024-11-18T13:45:45.927Z] Copying: 577/1024 [MB] (22 MBps) [2024-11-18T13:45:46.870Z] Copying: 
593/1024 [MB] (16 MBps) [2024-11-18T13:45:47.811Z] Copying: 611/1024 [MB] (17 MBps) [2024-11-18T13:45:48.755Z] Copying: 632/1024 [MB] (20 MBps) [2024-11-18T13:45:49.698Z] Copying: 647/1024 [MB] (14 MBps) [2024-11-18T13:45:51.081Z] Copying: 666/1024 [MB] (19 MBps) [2024-11-18T13:45:52.022Z] Copying: 687/1024 [MB] (21 MBps) [2024-11-18T13:45:52.966Z] Copying: 698/1024 [MB] (10 MBps) [2024-11-18T13:45:53.909Z] Copying: 709/1024 [MB] (10 MBps) [2024-11-18T13:45:54.850Z] Copying: 721/1024 [MB] (11 MBps) [2024-11-18T13:45:55.793Z] Copying: 732/1024 [MB] (11 MBps) [2024-11-18T13:45:56.775Z] Copying: 744/1024 [MB] (11 MBps) [2024-11-18T13:45:57.741Z] Copying: 755/1024 [MB] (11 MBps) [2024-11-18T13:45:59.128Z] Copying: 766/1024 [MB] (10 MBps) [2024-11-18T13:45:59.706Z] Copying: 776/1024 [MB] (10 MBps) [2024-11-18T13:46:01.091Z] Copying: 788/1024 [MB] (11 MBps) [2024-11-18T13:46:02.034Z] Copying: 799/1024 [MB] (11 MBps) [2024-11-18T13:46:02.979Z] Copying: 811/1024 [MB] (11 MBps) [2024-11-18T13:46:03.923Z] Copying: 822/1024 [MB] (11 MBps) [2024-11-18T13:46:04.869Z] Copying: 833/1024 [MB] (10 MBps) [2024-11-18T13:46:05.813Z] Copying: 843/1024 [MB] (10 MBps) [2024-11-18T13:46:06.757Z] Copying: 854/1024 [MB] (10 MBps) [2024-11-18T13:46:07.699Z] Copying: 865/1024 [MB] (10 MBps) [2024-11-18T13:46:09.079Z] Copying: 876/1024 [MB] (11 MBps) [2024-11-18T13:46:10.036Z] Copying: 910/1024 [MB] (33 MBps) [2024-11-18T13:46:10.979Z] Copying: 929/1024 [MB] (19 MBps) [2024-11-18T13:46:11.923Z] Copying: 949/1024 [MB] (19 MBps) [2024-11-18T13:46:12.868Z] Copying: 969/1024 [MB] (19 MBps) [2024-11-18T13:46:13.810Z] Copying: 990/1024 [MB] (20 MBps) [2024-11-18T13:46:14.380Z] Copying: 1011/1024 [MB] (21 MBps) [2024-11-18T13:46:14.642Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 13:46:14.615295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.514 [2024-11-18 13:46:14.615394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:18.514 [2024-11-18 13:46:14.615422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:18.514 [2024-11-18 13:46:14.615440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.514 [2024-11-18 13:46:14.615484] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:18.514 [2024-11-18 13:46:14.616413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.514 [2024-11-18 13:46:14.616541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:18.514 [2024-11-18 13:46:14.616563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:33:18.514 [2024-11-18 13:46:14.616581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.514 [2024-11-18 13:46:14.617048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.514 [2024-11-18 13:46:14.617068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:18.514 [2024-11-18 13:46:14.617085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:33:18.514 [2024-11-18 13:46:14.617100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.514 [2024-11-18 13:46:14.617153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.514 [2024-11-18 13:46:14.617212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:18.514 [2024-11-18 13:46:14.617231] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:18.514 [2024-11-18 13:46:14.617246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.514 [2024-11-18 13:46:14.617345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.514 [2024-11-18 13:46:14.617367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:18.514 [2024-11-18 13:46:14.617385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:18.514 [2024-11-18 13:46:14.617401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.514 [2024-11-18 13:46:14.617428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:18.514 [2024-11-18 13:46:14.617462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:18.514 [2024-11-18 13:46:14.617488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.617997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:18.514 [2024-11-18 13:46:14.618012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618208] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 
13:46:14.618598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.618970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:33:18.515 [2024-11-18 13:46:14.618986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.619001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.619016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.619031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.619046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.619061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:18.515 [2024-11-18 13:46:14.619093] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:18.515 [2024-11-18 13:46:14.619109] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f35d27a6-af2a-4f68-9f73-b853ff45a994 00:33:18.515 [2024-11-18 13:46:14.619136] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:18.515 [2024-11-18 13:46:14.619151] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104 00:33:18.515 [2024-11-18 13:46:14.619182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072 00:33:18.515 [2024-11-18 13:46:14.619199] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:33:18.515 [2024-11-18 13:46:14.619267] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:18.515 [2024-11-18 13:46:14.619285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:18.515 [2024-11-18 13:46:14.619300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:18.515 [2024-11-18 13:46:14.619314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:18.515 [2024-11-18 13:46:14.619328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:18.515 [2024-11-18 13:46:14.619342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.515 [2024-11-18 13:46:14.619358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:18.515 [2024-11-18 13:46:14.619374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.915 ms 00:33:18.515 [2024-11-18 13:46:14.619389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.515 [2024-11-18 13:46:14.621826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.515 [2024-11-18 13:46:14.621871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:18.515 [2024-11-18 13:46:14.621890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.407 ms 00:33:18.515 [2024-11-18 13:46:14.621899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.515 [2024-11-18 13:46:14.622021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.515 [2024-11-18 13:46:14.622031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:18.515 [2024-11-18 13:46:14.622040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:33:18.515 [2024-11-18 13:46:14.622048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.515 [2024-11-18 13:46:14.629580] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.515 [2024-11-18 13:46:14.629627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:18.515 [2024-11-18 13:46:14.629638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.515 [2024-11-18 13:46:14.629647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.515 [2024-11-18 13:46:14.629718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.515 [2024-11-18 13:46:14.629728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:18.516 [2024-11-18 13:46:14.629737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.516 [2024-11-18 13:46:14.629746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.516 [2024-11-18 13:46:14.629807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.516 [2024-11-18 13:46:14.629819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:18.516 [2024-11-18 13:46:14.629833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.516 [2024-11-18 13:46:14.629847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.516 [2024-11-18 13:46:14.629865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.516 [2024-11-18 13:46:14.629874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:18.516 [2024-11-18 13:46:14.629883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.516 [2024-11-18 13:46:14.629892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.778 [2024-11-18 13:46:14.643489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.778 [2024-11-18 13:46:14.643538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:18.778 [2024-11-18 13:46:14.643549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.778 [2024-11-18 13:46:14.643557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.778 [2024-11-18 13:46:14.654340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.778 [2024-11-18 13:46:14.654390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:18.778 [2024-11-18 13:46:14.654401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.778 [2024-11-18 13:46:14.654410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.778 [2024-11-18 13:46:14.654459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.778 [2024-11-18 13:46:14.654469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:18.778 [2024-11-18 13:46:14.654477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.778 [2024-11-18 13:46:14.654490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.778 [2024-11-18 13:46:14.654525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.778 [2024-11-18 13:46:14.654534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:18.778 [2024-11-18 13:46:14.654546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.778 [2024-11-18 13:46:14.654554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:33:18.778 [2024-11-18 13:46:14.654615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.778 [2024-11-18 13:46:14.654631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:18.778 [2024-11-18 13:46:14.654640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.778 [2024-11-18 13:46:14.654648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.778 [2024-11-18 13:46:14.654673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.778 [2024-11-18 13:46:14.654682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:18.778 [2024-11-18 13:46:14.654691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.778 [2024-11-18 13:46:14.654698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.778 [2024-11-18 13:46:14.654736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.778 [2024-11-18 13:46:14.654749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:18.778 [2024-11-18 13:46:14.654757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.778 [2024-11-18 13:46:14.654775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.778 [2024-11-18 13:46:14.654820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.778 [2024-11-18 13:46:14.654829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:18.778 [2024-11-18 13:46:14.654838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.778 [2024-11-18 13:46:14.654852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.778 [2024-11-18 13:46:14.654983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.674 ms, result 0 00:33:18.778 00:33:18.778 00:33:18.778 13:46:14 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:21.326 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92808 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92808 ']' 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92808 00:33:21.326 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92808) - No such process 00:33:21.326 Process with pid 92808 is not found 00:33:21.326 Remove shared memory files 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 92808 is not found' 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared 
memory files 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_band_md /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_l2p_l1 /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_l2p_l2 /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_l2p_l2_ctx /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_nvc_md /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_p2l_pool /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_sb /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_sb_shm /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_trim_bitmap /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_trim_log /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_trim_md /dev/hugepages/ftl_f35d27a6-af2a-4f68-9f73-b853ff45a994_vmap 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:21.326 00:33:21.326 real 4m49.104s 00:33:21.326 user 4m37.203s 00:33:21.326 sys 0m11.548s 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:21.326 ************************************ 00:33:21.326 END TEST ftl_restore_fast 00:33:21.326 ************************************ 00:33:21.326 13:46:17 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:21.326 13:46:17 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:21.326 13:46:17 ftl -- ftl/ftl.sh@14 -- # killprocess 83842 00:33:21.326 13:46:17 ftl -- common/autotest_common.sh@954 -- # '[' -z 83842 ']' 00:33:21.326 13:46:17 ftl -- common/autotest_common.sh@958 -- # kill -0 83842 00:33:21.326 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83842) - No such process 00:33:21.326 Process with pid 83842 is not found 00:33:21.326 13:46:17 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 83842 is not found' 00:33:21.326 13:46:17 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:21.326 13:46:17 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95770 00:33:21.326 13:46:17 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95770 00:33:21.326 13:46:17 ftl -- common/autotest_common.sh@835 -- # '[' -z 95770 ']' 00:33:21.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.326 13:46:17 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.326 13:46:17 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:21.326 13:46:17 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.326 13:46:17 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:21.326 13:46:17 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:21.326 13:46:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:21.326 [2024-11-18 13:46:17.339825] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 23.11.0 initialization... 
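The at_ftl_exit handler has just launched a fresh spdk_tgt (pid 95770 in this run); the lines that follow attach the NVMe controller behind 0000:00:11.0, delete any lvstore left on it from the FTL tests, and then kill the target before the devices are rebound. A condensed, illustrative sketch of that sequence — paths, the bdev name nvme0, and the PCIe address are taken from this run, and the real helpers (killprocess, waitforlisten) with their error handling are omitted:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$spdk_tgt" & tgt_pid=$!            # fresh target used only for cleanup
  # waitforlisten "$tgt_pid"          # real script blocks until /var/tmp/spdk.sock is up
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  for lvs in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    "$rpc" bdev_lvol_delete_lvstore -u "$lvs"    # drop leftover lvstores
  done
  kill "$tgt_pid"; wait "$tgt_pid"               # killprocess equivalent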
00:33:21.326 [2024-11-18 13:46:17.339949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95770 ] 00:33:21.586 [2024-11-18 13:46:17.497915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.586 [2024-11-18 13:46:17.519531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.159 13:46:18 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:22.159 13:46:18 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:22.159 13:46:18 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:22.421 nvme0n1 00:33:22.421 13:46:18 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:22.421 13:46:18 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:22.421 13:46:18 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:22.682 13:46:18 ftl -- ftl/common.sh@28 -- # stores=17fb5234-0e2f-4d47-9404-39b53d0a0e44 00:33:22.682 13:46:18 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:22.682 13:46:18 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 17fb5234-0e2f-4d47-9404-39b53d0a0e44 00:33:22.944 13:46:18 ftl -- ftl/ftl.sh@23 -- # killprocess 95770 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@954 -- # '[' -z 95770 ']' 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@958 -- # kill -0 95770 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@959 -- # uname 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95770 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:22.944 killing process with pid 95770 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95770' 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@973 -- # kill 95770 00:33:22.944 13:46:18 ftl -- common/autotest_common.sh@978 -- # wait 95770 00:33:23.205 13:46:19 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:23.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:23.466 Waiting for block devices as requested 00:33:23.466 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:23.726 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:23.726 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:23.726 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:29.016 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:29.016 Remove shared memory files 00:33:29.016 13:46:24 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:29.016 13:46:24 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:29.016 13:46:24 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:29.016 13:46:24 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:29.016 13:46:24 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:29.016 13:46:24 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:29.016 13:46:24 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:29.016 00:33:29.016 real 
17m56.476s 00:33:29.016 user 19m35.909s 00:33:29.016 sys 1m20.068s 00:33:29.016 ************************************ 00:33:29.016 END TEST ftl 00:33:29.016 13:46:24 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.016 13:46:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:29.016 ************************************ 00:33:29.016 13:46:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:29.016 13:46:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:29.016 13:46:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:29.016 13:46:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:29.016 13:46:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:29.016 13:46:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:29.016 13:46:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:29.016 13:46:24 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:29.016 13:46:24 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:29.016 13:46:24 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:29.016 13:46:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:29.016 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:33:29.016 13:46:24 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:29.016 13:46:24 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:29.016 13:46:24 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:29.016 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:33:30.401 INFO: APP EXITING 00:33:30.401 INFO: killing all VMs 00:33:30.401 INFO: killing vhost app 00:33:30.401 INFO: EXIT DONE 00:33:30.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:31.235 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:31.235 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:31.235 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:31.235 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:31.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:31.757 Cleaning 00:33:31.757 Removing: /var/run/dpdk/spdk0/config 00:33:31.757 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:32.019 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:32.019 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:32.019 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:32.019 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:32.019 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:32.019 Removing: /var/run/dpdk/spdk0 00:33:32.019 Removing: /var/run/dpdk/spdk_pid69349 00:33:32.019 Removing: /var/run/dpdk/spdk_pid69507 00:33:32.019 Removing: /var/run/dpdk/spdk_pid69703 00:33:32.019 Removing: /var/run/dpdk/spdk_pid69791 00:33:32.019 Removing: /var/run/dpdk/spdk_pid69819 00:33:32.019 Removing: /var/run/dpdk/spdk_pid69925 00:33:32.019 Removing: /var/run/dpdk/spdk_pid69943 00:33:32.019 Removing: /var/run/dpdk/spdk_pid70120 00:33:32.019 Removing: /var/run/dpdk/spdk_pid70198 00:33:32.019 Removing: /var/run/dpdk/spdk_pid70273 00:33:32.019 Removing: /var/run/dpdk/spdk_pid70373 00:33:32.019 Removing: /var/run/dpdk/spdk_pid70448 00:33:32.019 Removing: /var/run/dpdk/spdk_pid70488 00:33:32.019 Removing: /var/run/dpdk/spdk_pid70524 00:33:32.019 Removing: /var/run/dpdk/spdk_pid70589 00:33:32.019 Removing: /var/run/dpdk/spdk_pid70679 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71098 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71146 00:33:32.019 
Removing: /var/run/dpdk/spdk_pid71187 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71203 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71261 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71277 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71335 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71351 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71393 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71411 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71453 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71471 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71598 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71629 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71714 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71879 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71941 00:33:32.019 Removing: /var/run/dpdk/spdk_pid71972 00:33:32.019 Removing: /var/run/dpdk/spdk_pid72399 00:33:32.019 Removing: /var/run/dpdk/spdk_pid72493 00:33:32.019 Removing: /var/run/dpdk/spdk_pid72591 00:33:32.019 Removing: /var/run/dpdk/spdk_pid72633 00:33:32.019 Removing: /var/run/dpdk/spdk_pid72653 00:33:32.019 Removing: /var/run/dpdk/spdk_pid72727 00:33:32.019 Removing: /var/run/dpdk/spdk_pid73345 00:33:32.019 Removing: /var/run/dpdk/spdk_pid73366 00:33:32.019 Removing: /var/run/dpdk/spdk_pid73831 00:33:32.019 Removing: /var/run/dpdk/spdk_pid73923 00:33:32.019 Removing: /var/run/dpdk/spdk_pid74027 00:33:32.019 Removing: /var/run/dpdk/spdk_pid74080 00:33:32.019 Removing: /var/run/dpdk/spdk_pid74100 00:33:32.019 Removing: /var/run/dpdk/spdk_pid74120 00:33:32.019 Removing: /var/run/dpdk/spdk_pid75961 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76082 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76086 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76098 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76148 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76152 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76164 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76210 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76214 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76226 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76265 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76269 00:33:32.019 Removing: /var/run/dpdk/spdk_pid76281 00:33:32.019 Removing: /var/run/dpdk/spdk_pid77654 00:33:32.019 Removing: /var/run/dpdk/spdk_pid77740 00:33:32.019 Removing: /var/run/dpdk/spdk_pid79135 00:33:32.019 Removing: /var/run/dpdk/spdk_pid80486 00:33:32.019 Removing: /var/run/dpdk/spdk_pid80551 00:33:32.019 Removing: /var/run/dpdk/spdk_pid80600 00:33:32.019 Removing: /var/run/dpdk/spdk_pid80654 00:33:32.019 Removing: /var/run/dpdk/spdk_pid80731 00:33:32.019 Removing: /var/run/dpdk/spdk_pid80794 00:33:32.019 Removing: /var/run/dpdk/spdk_pid80933 00:33:32.019 Removing: /var/run/dpdk/spdk_pid81281 00:33:32.019 Removing: /var/run/dpdk/spdk_pid81301 00:33:32.019 Removing: /var/run/dpdk/spdk_pid81737 00:33:32.019 Removing: /var/run/dpdk/spdk_pid81912 00:33:32.019 Removing: /var/run/dpdk/spdk_pid82004 00:33:32.019 Removing: /var/run/dpdk/spdk_pid82104 00:33:32.019 Removing: /var/run/dpdk/spdk_pid82135 00:33:32.019 Removing: /var/run/dpdk/spdk_pid82166 00:33:32.019 Removing: /var/run/dpdk/spdk_pid82451 00:33:32.019 Removing: /var/run/dpdk/spdk_pid82488 00:33:32.019 Removing: /var/run/dpdk/spdk_pid82534 00:33:32.019 Removing: /var/run/dpdk/spdk_pid82899 00:33:32.019 Removing: /var/run/dpdk/spdk_pid83043 00:33:32.019 Removing: /var/run/dpdk/spdk_pid83842 00:33:32.019 Removing: /var/run/dpdk/spdk_pid83952 00:33:32.019 Removing: /var/run/dpdk/spdk_pid84105 00:33:32.019 Removing: 
/var/run/dpdk/spdk_pid84180 00:33:32.019 Removing: /var/run/dpdk/spdk_pid84471 00:33:32.019 Removing: /var/run/dpdk/spdk_pid84731 00:33:32.019 Removing: /var/run/dpdk/spdk_pid85070 00:33:32.281 Removing: /var/run/dpdk/spdk_pid85226 00:33:32.281 Removing: /var/run/dpdk/spdk_pid85382 00:33:32.281 Removing: /var/run/dpdk/spdk_pid85419 00:33:32.281 Removing: /var/run/dpdk/spdk_pid85595 00:33:32.281 Removing: /var/run/dpdk/spdk_pid85614 00:33:32.281 Removing: /var/run/dpdk/spdk_pid85650 00:33:32.281 Removing: /var/run/dpdk/spdk_pid85861 00:33:32.281 Removing: /var/run/dpdk/spdk_pid86063 00:33:32.281 Removing: /var/run/dpdk/spdk_pid86883 00:33:32.281 Removing: /var/run/dpdk/spdk_pid87625 00:33:32.281 Removing: /var/run/dpdk/spdk_pid88325 00:33:32.281 Removing: /var/run/dpdk/spdk_pid89160 00:33:32.281 Removing: /var/run/dpdk/spdk_pid89296 00:33:32.281 Removing: /var/run/dpdk/spdk_pid89377 00:33:32.281 Removing: /var/run/dpdk/spdk_pid89712 00:33:32.281 Removing: /var/run/dpdk/spdk_pid89765 00:33:32.281 Removing: /var/run/dpdk/spdk_pid90543 00:33:32.281 Removing: /var/run/dpdk/spdk_pid91012 00:33:32.281 Removing: /var/run/dpdk/spdk_pid91801 00:33:32.281 Removing: /var/run/dpdk/spdk_pid91923 00:33:32.281 Removing: /var/run/dpdk/spdk_pid91959 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92014 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92065 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92118 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92357 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92427 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92488 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92545 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92580 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92630 00:33:32.281 Removing: /var/run/dpdk/spdk_pid92808 00:33:32.282 Removing: /var/run/dpdk/spdk_pid93002 00:33:32.282 Removing: /var/run/dpdk/spdk_pid93748 00:33:32.282 Removing: /var/run/dpdk/spdk_pid94443 00:33:32.282 Removing: /var/run/dpdk/spdk_pid94975 00:33:32.282 Removing: /var/run/dpdk/spdk_pid95770 00:33:32.282 Clean 00:33:32.282 13:46:28 -- common/autotest_common.sh@1453 -- # return 0 00:33:32.282 13:46:28 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:32.282 13:46:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.282 13:46:28 -- common/autotest_common.sh@10 -- # set +x 00:33:32.282 13:46:28 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:33:32.282 13:46:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.282 13:46:28 -- common/autotest_common.sh@10 -- # set +x 00:33:32.282 13:46:28 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:32.282 13:46:28 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:32.542 13:46:28 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:32.542 13:46:28 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:32.542 13:46:28 -- spdk/autotest.sh@398 -- # hostname 00:33:32.542 13:46:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:32.542 geninfo: WARNING: invalid characters removed from testname! 
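At this point autotest has captured the per-test coverage (cov_test.info) on the test VM; the lines below fold it into the baseline capture and prune sources that are not SPDK's own. A condensed equivalent of that sequence — the long --rc coverage options from the actual invocations are dropped for brevity, and the final genhtml step is an assumption, it is not shown in this log:

  out=/home/vagrant/spdk_repo/spdk/../output
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge baseline + test capture
  lcov -q -r "$out/cov_total.info" '*/dpdk/*'            -o "$out/cov_total.info"    # drop vendored DPDK
  lcov -q -r "$out/cov_total.info" '/usr/*'              -o "$out/cov_total.info" --ignore-errors unused,unused
  lcov -q -r "$out/cov_total.info" '*/examples/vmd/*'    -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*'  -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*'    -o "$out/cov_total.info"
  # genhtml "$out/cov_total.info" -o "$out/coverage"     # assumed HTML report step, not traced here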
00:33:59.191 13:46:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:01.742 13:46:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:04.292 13:47:00 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:06.843 13:47:02 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:09.387 13:47:04 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:10.762 13:47:06 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:12.664 13:47:08 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:12.664 13:47:08 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:12.664 13:47:08 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:12.664 13:47:08 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:12.664 13:47:08 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:12.664 13:47:08 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:12.664 + [[ -n 5782 ]]
00:34:12.675 + sudo kill 5782
00:34:12.675 [Pipeline] }
00:34:12.692 [Pipeline] // timeout
00:34:12.698 [Pipeline] }
00:34:12.714 [Pipeline] // stage
00:34:12.720 [Pipeline] }
00:34:12.737 [Pipeline] // catchError
00:34:12.747 [Pipeline] stage
00:34:12.750 [Pipeline] { (Stop VM)
00:34:12.764 [Pipeline] sh
00:34:13.050 + vagrant halt
00:34:15.589 ==> default: Halting domain...
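The spdk/autotest.sh lines above merge the base and test tracefiles into cov_total.info and then filter out code that is not SPDK's own: the bundled DPDK sources, system headers under /usr, and a few example and app directories. Afterwards timing_finish renders timing.txt into a flame graph with flamegraph.pl, and the pipeline moves on to stopping the VM. The same merge-and-filter sequence can be written more compactly; the sketch below assumes the two input tracefiles already exist in the output directory from the log (spdk/../output) and folds the five removals into one loop.

# Sketch of the merge-and-filter sequence, paths taken from the log.
OUT_DIR=/home/vagrant/spdk_repo/output
# -a adds each tracefile into a combined total.
lcov -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" -o "$OUT_DIR/cov_total.info"
# Strip coverage that is not interesting for SPDK itself; the original run also
# passes --ignore-errors unused for the /usr/* pattern, omitted here for brevity.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
done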
00:34:22.185 [Pipeline] sh
00:34:22.469 + vagrant destroy -f
00:34:25.014 ==> default: Removing domain...
00:34:25.971 [Pipeline] sh
00:34:26.259 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:26.269 [Pipeline] }
00:34:26.284 [Pipeline] // stage
00:34:26.291 [Pipeline] }
00:34:26.306 [Pipeline] // dir
00:34:26.312 [Pipeline] }
00:34:26.329 [Pipeline] // wrap
00:34:26.336 [Pipeline] }
00:34:26.350 [Pipeline] // catchError
00:34:26.361 [Pipeline] stage
00:34:26.363 [Pipeline] { (Epilogue)
00:34:26.379 [Pipeline] sh
00:34:26.669 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:32.050 [Pipeline] catchError
00:34:32.052 [Pipeline] {
00:34:32.064 [Pipeline] sh
00:34:32.350 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:32.350 Artifacts sizes are good
00:34:32.360 [Pipeline] }
00:34:32.375 [Pipeline] // catchError
00:34:32.387 [Pipeline] archiveArtifacts
00:34:32.395 Archiving artifacts
00:34:32.496 [Pipeline] cleanWs
00:34:32.509 [WS-CLEANUP] Deleting project workspace...
00:34:32.509 [WS-CLEANUP] Deferred wipeout is used...
00:34:32.516 [WS-CLEANUP] done
00:34:32.518 [Pipeline] }
00:34:32.534 [Pipeline] // stage
00:34:32.540 [Pipeline] }
00:34:32.554 [Pipeline] // node
00:34:32.560 [Pipeline] End of Pipeline
00:34:32.602 Finished: SUCCESS
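With coverage finished, the remaining stages are pure teardown: the Vagrant test VM is halted and destroyed, the output directory is handed back to the Jenkins workspace, the jbp helper scripts compress and size-check the artifacts, and the workspace is archived and wiped. Condensed into shell, the VM and artifact hand-off looks roughly like the sketch below; the two helper scripts come from the jbp checkout and are invoked as-is, and the working directories are whichever ones the pipeline's dir steps select.

# Rough sketch of the teardown steps, paths as they appear in the log.
vagrant halt                      # stop the test VM
vagrant destroy -f                # then delete it without prompting
mv output /var/jenkins/workspace/nvme-vg-autotest/output    # hand results to Jenkins
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh   # presumably compresses large artifacts (inferred from the name)
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh # prints "Artifacts sizes are good" when within limits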