00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 135 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3636 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.059 The recommended git tool is: git 00:00:00.059 using credential 00000000-0000-0000-0000-000000000002 00:00:00.061 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.083 Fetching changes from the remote Git repository 00:00:00.086 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.127 Using shallow fetch with depth 1 00:00:00.127 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.127 > git --version # timeout=10 00:00:00.187 > git --version # 'git version 2.39.2' 00:00:00.187 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.053 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.067 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.081 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.081 > git config core.sparsecheckout # timeout=10 00:00:04.091 > git read-tree -mu HEAD # timeout=10 00:00:04.109 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.128 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.128 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.231 [Pipeline] Start of Pipeline 00:00:04.244 [Pipeline] library 00:00:04.246 Loading library shm_lib@master 00:00:04.246 Library shm_lib@master is cached. Copying from home. 00:00:04.263 [Pipeline] node 00:00:04.278 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.279 [Pipeline] { 00:00:04.291 [Pipeline] catchError 00:00:04.293 [Pipeline] { 00:00:04.304 [Pipeline] wrap 00:00:04.310 [Pipeline] { 00:00:04.319 [Pipeline] stage 00:00:04.321 [Pipeline] { (Prologue) 00:00:04.339 [Pipeline] echo 00:00:04.340 Node: VM-host-SM38 00:00:04.347 [Pipeline] cleanWs 00:00:04.360 [WS-CLEANUP] Deleting project workspace... 00:00:04.360 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.367 [WS-CLEANUP] done 00:00:04.596 [Pipeline] setCustomBuildProperty 00:00:04.663 [Pipeline] httpRequest 00:00:05.198 [Pipeline] echo 00:00:05.200 Sorcerer 10.211.164.20 is alive 00:00:05.210 [Pipeline] retry 00:00:05.213 [Pipeline] { 00:00:05.227 [Pipeline] httpRequest 00:00:05.232 HttpMethod: GET 00:00:05.233 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.233 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.235 Response Code: HTTP/1.1 200 OK 00:00:05.235 Success: Status code 200 is in the accepted range: 200,404 00:00:05.236 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.828 [Pipeline] } 00:00:05.842 [Pipeline] // retry 00:00:05.850 [Pipeline] sh 00:00:06.135 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.148 [Pipeline] httpRequest 00:00:06.449 [Pipeline] echo 00:00:06.450 Sorcerer 10.211.164.20 is alive 00:00:06.459 [Pipeline] retry 00:00:06.461 [Pipeline] { 00:00:06.477 [Pipeline] httpRequest 00:00:06.482 HttpMethod: GET 00:00:06.482 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:06.483 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:06.492 Response Code: HTTP/1.1 200 OK 00:00:06.493 Success: Status code 200 is in the accepted range: 200,404 00:00:06.493 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:51.704 [Pipeline] } 00:01:51.722 [Pipeline] // retry 00:01:51.730 [Pipeline] sh 00:01:52.016 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:54.573 [Pipeline] sh 00:01:54.852 + git -C spdk log --oneline -n5 00:01:54.852 b18e1bd62 version: v24.09.1-pre 00:01:54.852 19524ad45 version: v24.09 00:01:54.852 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:54.852 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:54.852 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:54.875 [Pipeline] withCredentials 00:01:54.887 > git --version # timeout=10 00:01:54.900 > git --version # 'git version 2.39.2' 00:01:54.921 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:54.923 [Pipeline] { 00:01:54.933 [Pipeline] retry 00:01:54.935 [Pipeline] { 00:01:54.951 [Pipeline] sh 00:01:55.236 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:55.509 [Pipeline] } 00:01:55.527 [Pipeline] // retry 00:01:55.534 [Pipeline] } 00:01:55.554 [Pipeline] // withCredentials 00:01:55.565 [Pipeline] httpRequest 00:01:55.925 [Pipeline] echo 00:01:55.927 Sorcerer 10.211.164.20 is alive 00:01:55.939 [Pipeline] retry 00:01:55.942 [Pipeline] { 00:01:55.959 [Pipeline] httpRequest 00:01:55.964 HttpMethod: GET 00:01:55.965 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:55.966 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:55.967 Response Code: HTTP/1.1 200 OK 00:01:55.967 Success: Status code 200 is in the accepted range: 200,404 00:01:55.968 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:58.442 [Pipeline] } 00:01:58.459 [Pipeline] // retry 00:01:58.466 [Pipeline] sh 00:01:58.750 + tar --no-same-owner -xf 
dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:00.146 [Pipeline] sh 00:02:00.427 + git -C dpdk log --oneline -n5 00:02:00.427 eeb0605f11 version: 23.11.0 00:02:00.427 238778122a doc: update release notes for 23.11 00:02:00.427 46aa6b3cfc doc: fix description of RSS features 00:02:00.427 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:00.427 7e421ae345 devtools: support skipping forbid rule check 00:02:00.445 [Pipeline] writeFile 00:02:00.458 [Pipeline] sh 00:02:00.741 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:00.755 [Pipeline] sh 00:02:01.040 + cat autorun-spdk.conf 00:02:01.040 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.040 SPDK_TEST_NVME=1 00:02:01.040 SPDK_TEST_FTL=1 00:02:01.040 SPDK_TEST_ISAL=1 00:02:01.040 SPDK_RUN_ASAN=1 00:02:01.040 SPDK_RUN_UBSAN=1 00:02:01.040 SPDK_TEST_XNVME=1 00:02:01.040 SPDK_TEST_NVME_FDP=1 00:02:01.040 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:01.040 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:01.040 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.048 RUN_NIGHTLY=1 00:02:01.050 [Pipeline] } 00:02:01.062 [Pipeline] // stage 00:02:01.076 [Pipeline] stage 00:02:01.078 [Pipeline] { (Run VM) 00:02:01.089 [Pipeline] sh 00:02:01.371 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:01.371 + echo 'Start stage prepare_nvme.sh' 00:02:01.371 Start stage prepare_nvme.sh 00:02:01.371 + [[ -n 3 ]] 00:02:01.371 + disk_prefix=ex3 00:02:01.371 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:01.371 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:01.371 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:01.371 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.371 ++ SPDK_TEST_NVME=1 00:02:01.371 ++ SPDK_TEST_FTL=1 00:02:01.371 ++ SPDK_TEST_ISAL=1 00:02:01.371 ++ SPDK_RUN_ASAN=1 00:02:01.371 ++ SPDK_RUN_UBSAN=1 00:02:01.371 ++ SPDK_TEST_XNVME=1 00:02:01.371 ++ SPDK_TEST_NVME_FDP=1 00:02:01.371 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:01.371 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:01.371 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.371 ++ RUN_NIGHTLY=1 00:02:01.371 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:01.371 + nvme_files=() 00:02:01.371 + declare -A nvme_files 00:02:01.371 + backend_dir=/var/lib/libvirt/images/backends 00:02:01.371 + nvme_files['nvme.img']=5G 00:02:01.371 + nvme_files['nvme-cmb.img']=5G 00:02:01.371 + nvme_files['nvme-multi0.img']=4G 00:02:01.371 + nvme_files['nvme-multi1.img']=4G 00:02:01.371 + nvme_files['nvme-multi2.img']=4G 00:02:01.371 + nvme_files['nvme-openstack.img']=8G 00:02:01.371 + nvme_files['nvme-zns.img']=5G 00:02:01.371 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:01.371 + (( SPDK_TEST_FTL == 1 )) 00:02:01.371 + nvme_files["nvme-ftl.img"]=6G 00:02:01.371 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:01.371 + nvme_files["nvme-fdp.img"]=1G 00:02:01.371 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:01.371 + for nvme in "${!nvme_files[@]}" 00:02:01.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:02:01.371 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:01.371 + for nvme in "${!nvme_files[@]}" 00:02:01.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:02:02.316 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:02.316 + for nvme in "${!nvme_files[@]}" 00:02:02.316 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:02:02.316 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:02.316 + for nvme in "${!nvme_files[@]}" 00:02:02.316 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:02:02.316 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:02.316 + for nvme in "${!nvme_files[@]}" 00:02:02.317 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:02:02.317 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:02.317 + for nvme in "${!nvme_files[@]}" 00:02:02.317 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:02:02.578 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:02.578 + for nvme in "${!nvme_files[@]}" 00:02:02.578 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:02:02.840 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:02.840 + for nvme in "${!nvme_files[@]}" 00:02:02.840 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:02:02.840 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:02.840 + for nvme in "${!nvme_files[@]}" 00:02:02.840 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:02:03.414 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:03.414 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:02:03.414 + echo 'End stage prepare_nvme.sh' 00:02:03.414 End stage prepare_nvme.sh 00:02:03.426 [Pipeline] sh 00:02:03.711 + DISTRO=fedora39 00:02:03.711 + CPUS=10 00:02:03.711 + RAM=12288 00:02:03.711 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:03.711 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:03.711 00:02:03.711 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:03.711 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:03.711 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:03.711 HELP=0 00:02:03.711 DRY_RUN=0 00:02:03.711 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:02:03.711 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:03.711 NVME_AUTO_CREATE=0 00:02:03.711 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:02:03.711 NVME_CMB=,,,, 00:02:03.711 NVME_PMR=,,,, 00:02:03.711 NVME_ZNS=,,,, 00:02:03.711 NVME_MS=true,,,, 00:02:03.711 NVME_FDP=,,,on, 00:02:03.711 SPDK_VAGRANT_DISTRO=fedora39 00:02:03.711 SPDK_VAGRANT_VMCPU=10 00:02:03.711 SPDK_VAGRANT_VMRAM=12288 00:02:03.711 SPDK_VAGRANT_PROVIDER=libvirt 00:02:03.711 SPDK_VAGRANT_HTTP_PROXY= 00:02:03.711 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:03.711 SPDK_OPENSTACK_NETWORK=0 00:02:03.711 VAGRANT_PACKAGE_BOX=0 00:02:03.711 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:03.711 FORCE_DISTRO=true 00:02:03.711 VAGRANT_BOX_VERSION= 00:02:03.711 EXTRA_VAGRANTFILES= 00:02:03.711 NIC_MODEL=e1000 00:02:03.711 00:02:03.711 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:03.711 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:06.248 Bringing machine 'default' up with 'libvirt' provider... 00:02:06.248 ==> default: Creating image (snapshot of base box volume). 00:02:06.507 ==> default: Creating domain with the following settings... 
00:02:06.507 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731803638_6bf3d1a6999b4926fd6d 00:02:06.507 ==> default: -- Domain type: kvm 00:02:06.507 ==> default: -- Cpus: 10 00:02:06.507 ==> default: -- Feature: acpi 00:02:06.507 ==> default: -- Feature: apic 00:02:06.507 ==> default: -- Feature: pae 00:02:06.507 ==> default: -- Memory: 12288M 00:02:06.507 ==> default: -- Memory Backing: hugepages: 00:02:06.507 ==> default: -- Management MAC: 00:02:06.507 ==> default: -- Loader: 00:02:06.507 ==> default: -- Nvram: 00:02:06.507 ==> default: -- Base box: spdk/fedora39 00:02:06.507 ==> default: -- Storage pool: default 00:02:06.507 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731803638_6bf3d1a6999b4926fd6d.img (20G) 00:02:06.507 ==> default: -- Volume Cache: default 00:02:06.507 ==> default: -- Kernel: 00:02:06.507 ==> default: -- Initrd: 00:02:06.507 ==> default: -- Graphics Type: vnc 00:02:06.507 ==> default: -- Graphics Port: -1 00:02:06.507 ==> default: -- Graphics IP: 127.0.0.1 00:02:06.507 ==> default: -- Graphics Password: Not defined 00:02:06.507 ==> default: -- Video Type: cirrus 00:02:06.507 ==> default: -- Video VRAM: 9216 00:02:06.507 ==> default: -- Sound Type: 00:02:06.507 ==> default: -- Keymap: en-us 00:02:06.507 ==> default: -- TPM Path: 00:02:06.507 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:06.507 ==> default: -- Command line args: 00:02:06.507 ==> default: -> value=-device, 00:02:06.507 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:06.507 ==> default: -> value=-drive, 00:02:06.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:06.507 ==> default: -> value=-device, 00:02:06.507 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:06.507 ==> default: -> value=-device, 00:02:06.507 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:06.507 ==> default: -> value=-drive, 00:02:06.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:02:06.507 ==> default: -> value=-device, 00:02:06.507 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.507 ==> default: -> value=-device, 00:02:06.507 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:06.507 ==> default: -> value=-drive, 00:02:06.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:06.507 ==> default: -> value=-device, 00:02:06.507 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.507 ==> default: -> value=-drive, 00:02:06.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:06.508 ==> default: -> value=-device, 00:02:06.508 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.508 ==> default: -> value=-drive, 00:02:06.508 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:06.508 ==> default: -> value=-device, 00:02:06.508 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.508 ==> default: -> value=-device, 00:02:06.508 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:06.508 ==> default: -> value=-device, 00:02:06.508 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:06.508 ==> default: -> value=-drive, 00:02:06.508 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:06.508 ==> default: -> value=-device, 00:02:06.508 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.766 ==> default: Creating shared folders metadata... 00:02:06.766 ==> default: Starting domain. 00:02:07.699 ==> default: Waiting for domain to get an IP address... 00:02:19.896 ==> default: Waiting for SSH to become available... 00:02:21.270 ==> default: Configuring and enabling network interfaces... 00:02:25.455 default: SSH address: 192.168.121.200:22 00:02:25.455 default: SSH username: vagrant 00:02:25.455 default: SSH auth method: private key 00:02:27.364 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:33.919 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:38.112 ==> default: Mounting SSHFS shared folder... 00:02:39.496 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:39.756 ==> default: Checking Mount.. 00:02:40.700 ==> default: Folder Successfully Mounted! 00:02:40.961 00:02:40.961 SUCCESS! 00:02:40.961 00:02:40.961 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:40.961 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:40.962 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:40.962 00:02:40.974 [Pipeline] } 00:02:40.990 [Pipeline] // stage 00:02:41.001 [Pipeline] dir 00:02:41.002 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:41.003 [Pipeline] { 00:02:41.018 [Pipeline] catchError 00:02:41.020 [Pipeline] { 00:02:41.035 [Pipeline] sh 00:02:41.400 + vagrant ssh-config --host vagrant 00:02:41.400 + tee ssh_conf 00:02:41.400 + sed -ne '/^Host/,$p' 00:02:43.952 Host vagrant 00:02:43.952 HostName 192.168.121.200 00:02:43.952 User vagrant 00:02:43.952 Port 22 00:02:43.952 UserKnownHostsFile /dev/null 00:02:43.952 StrictHostKeyChecking no 00:02:43.952 PasswordAuthentication no 00:02:43.952 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:43.952 IdentitiesOnly yes 00:02:43.952 LogLevel FATAL 00:02:43.952 ForwardAgent yes 00:02:43.952 ForwardX11 yes 00:02:43.952 00:02:43.968 [Pipeline] withEnv 00:02:43.971 [Pipeline] { 00:02:43.985 [Pipeline] sh 00:02:44.271 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:44.271 source /etc/os-release 00:02:44.271 [[ -e /image.version ]] && img=$(< /image.version) 00:02:44.271 # Minimal, systemd-like check. 
00:02:44.271 if [[ -e /.dockerenv ]]; then 00:02:44.271 # Clear garbage from the node'\''s name: 00:02:44.271 # agt-er_autotest_547-896 -> autotest_547-896 00:02:44.271 # $HOSTNAME is the actual container id 00:02:44.271 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:44.271 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:44.271 # We can assume this is a mount from a host where container is running, 00:02:44.271 # so fetch its hostname to easily identify the target swarm worker. 00:02:44.271 container="$(< /etc/hostname) ($agent)" 00:02:44.271 else 00:02:44.271 # Fallback 00:02:44.271 container=$agent 00:02:44.271 fi 00:02:44.271 fi 00:02:44.271 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:44.271 ' 00:02:44.546 [Pipeline] } 00:02:44.562 [Pipeline] // withEnv 00:02:44.571 [Pipeline] setCustomBuildProperty 00:02:44.586 [Pipeline] stage 00:02:44.588 [Pipeline] { (Tests) 00:02:44.606 [Pipeline] sh 00:02:44.892 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:45.167 [Pipeline] sh 00:02:45.453 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:45.732 [Pipeline] timeout 00:02:45.733 Timeout set to expire in 50 min 00:02:45.734 [Pipeline] { 00:02:45.749 [Pipeline] sh 00:02:46.035 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:46.608 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:46.622 [Pipeline] sh 00:02:46.907 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:47.188 [Pipeline] sh 00:02:47.473 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:47.753 [Pipeline] sh 00:02:48.038 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:48.300 ++ readlink -f spdk_repo 00:02:48.300 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:48.300 + [[ -n /home/vagrant/spdk_repo ]] 00:02:48.300 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:48.300 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:48.300 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:48.300 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:48.300 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:48.300 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:48.300 + cd /home/vagrant/spdk_repo 00:02:48.300 + source /etc/os-release 00:02:48.300 ++ NAME='Fedora Linux' 00:02:48.300 ++ VERSION='39 (Cloud Edition)' 00:02:48.300 ++ ID=fedora 00:02:48.300 ++ VERSION_ID=39 00:02:48.300 ++ VERSION_CODENAME= 00:02:48.300 ++ PLATFORM_ID=platform:f39 00:02:48.300 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:48.300 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:48.300 ++ LOGO=fedora-logo-icon 00:02:48.300 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:48.300 ++ HOME_URL=https://fedoraproject.org/ 00:02:48.300 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:48.300 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:48.300 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:48.300 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:48.300 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:48.300 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:48.300 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:48.300 ++ SUPPORT_END=2024-11-12 00:02:48.300 ++ VARIANT='Cloud Edition' 00:02:48.300 ++ VARIANT_ID=cloud 00:02:48.300 + uname -a 00:02:48.300 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:48.300 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:48.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:48.823 Hugepages 00:02:48.823 node hugesize free / total 00:02:48.823 node0 1048576kB 0 / 0 00:02:48.823 node0 2048kB 0 / 0 00:02:48.823 00:02:48.823 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:48.823 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:48.823 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:48.823 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:49.084 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:49.084 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:49.084 + rm -f /tmp/spdk-ld-path 00:02:49.084 + source autorun-spdk.conf 00:02:49.084 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:49.084 ++ SPDK_TEST_NVME=1 00:02:49.084 ++ SPDK_TEST_FTL=1 00:02:49.084 ++ SPDK_TEST_ISAL=1 00:02:49.084 ++ SPDK_RUN_ASAN=1 00:02:49.084 ++ SPDK_RUN_UBSAN=1 00:02:49.084 ++ SPDK_TEST_XNVME=1 00:02:49.084 ++ SPDK_TEST_NVME_FDP=1 00:02:49.084 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:49.084 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:49.084 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:49.084 ++ RUN_NIGHTLY=1 00:02:49.084 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:49.084 + [[ -n '' ]] 00:02:49.084 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:49.084 + for M in /var/spdk/build-*-manifest.txt 00:02:49.084 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:49.084 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:49.084 + for M in /var/spdk/build-*-manifest.txt 00:02:49.084 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:49.084 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:49.084 + for M in /var/spdk/build-*-manifest.txt 00:02:49.084 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:49.084 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:49.084 ++ uname 00:02:49.084 + [[ Linux == 
\L\i\n\u\x ]] 00:02:49.084 + sudo dmesg -T 00:02:49.084 + sudo dmesg --clear 00:02:49.084 + dmesg_pid=5772 00:02:49.084 + [[ Fedora Linux == FreeBSD ]] 00:02:49.084 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:49.084 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:49.084 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:49.084 + [[ -x /usr/src/fio-static/fio ]] 00:02:49.084 + sudo dmesg -Tw 00:02:49.084 + export FIO_BIN=/usr/src/fio-static/fio 00:02:49.084 + FIO_BIN=/usr/src/fio-static/fio 00:02:49.084 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:49.084 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:49.084 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:49.084 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:49.084 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:49.084 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:49.084 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:49.084 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:49.084 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:49.084 Test configuration: 00:02:49.084 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:49.084 SPDK_TEST_NVME=1 00:02:49.084 SPDK_TEST_FTL=1 00:02:49.084 SPDK_TEST_ISAL=1 00:02:49.084 SPDK_RUN_ASAN=1 00:02:49.084 SPDK_RUN_UBSAN=1 00:02:49.084 SPDK_TEST_XNVME=1 00:02:49.084 SPDK_TEST_NVME_FDP=1 00:02:49.084 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:49.084 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:49.084 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:49.345 RUN_NIGHTLY=1 00:34:41 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:49.345 00:34:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:49.345 00:34:41 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:49.345 00:34:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:49.345 00:34:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:49.345 00:34:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:49.345 00:34:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.346 00:34:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.346 00:34:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.346 00:34:41 -- paths/export.sh@5 -- $ export PATH 00:02:49.346 00:34:41 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.346 00:34:41 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:49.346 00:34:41 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:49.346 00:34:41 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731803681.XXXXXX 00:02:49.346 00:34:41 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731803681.ggFNMu 00:02:49.346 00:34:41 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:49.346 00:34:41 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:49.346 00:34:41 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:49.346 00:34:41 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:49.346 00:34:41 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:49.346 00:34:41 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:49.346 00:34:41 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:49.346 00:34:41 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:49.346 00:34:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:49.346 00:34:41 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:49.346 00:34:41 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:49.346 00:34:41 -- pm/common@17 -- $ local monitor 00:02:49.346 00:34:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:49.346 00:34:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:49.346 00:34:41 -- pm/common@25 -- $ sleep 1 00:02:49.346 00:34:41 -- pm/common@21 -- $ date +%s 00:02:49.346 00:34:41 -- pm/common@21 -- $ date +%s 00:02:49.346 00:34:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731803681 00:02:49.346 00:34:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731803681 00:02:49.346 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731803681_collect-cpu-load.pm.log 00:02:49.346 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731803681_collect-vmstat.pm.log 00:02:50.291 00:34:42 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:50.291 00:34:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:50.291 00:34:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:50.291 00:34:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:50.291 00:34:42 -- spdk/autobuild.sh@16 -- $ date -u 00:02:50.291 Sun 
Nov 17 12:34:42 AM UTC 2024 00:02:50.291 00:34:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:50.291 v24.09-1-gb18e1bd62 00:02:50.291 00:34:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:50.291 00:34:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:50.292 00:34:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:50.292 00:34:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:50.292 00:34:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.292 ************************************ 00:02:50.292 START TEST asan 00:02:50.292 ************************************ 00:02:50.292 using asan 00:02:50.292 00:34:42 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:50.292 00:02:50.292 real 0m0.000s 00:02:50.292 user 0m0.000s 00:02:50.292 sys 0m0.000s 00:02:50.292 ************************************ 00:02:50.292 END TEST asan 00:02:50.292 ************************************ 00:02:50.292 00:34:42 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:50.292 00:34:42 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:50.292 00:34:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:50.292 00:34:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:50.292 00:34:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:50.292 00:34:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:50.292 00:34:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.292 ************************************ 00:02:50.292 START TEST ubsan 00:02:50.292 ************************************ 00:02:50.292 using ubsan 00:02:50.292 00:34:42 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:50.292 00:02:50.292 real 0m0.000s 00:02:50.292 user 0m0.000s 00:02:50.292 sys 0m0.000s 00:02:50.292 00:34:42 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:50.292 00:34:42 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:50.292 ************************************ 00:02:50.292 END TEST ubsan 00:02:50.292 ************************************ 00:02:50.554 00:34:42 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:50.554 00:34:42 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:50.554 00:34:42 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:50.554 00:34:42 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:50.554 00:34:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:50.554 00:34:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.554 ************************************ 00:02:50.554 START TEST build_native_dpdk 00:02:50.554 ************************************ 00:02:50.554 00:34:42 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:50.554 00:34:42 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:50.554 eeb0605f11 version: 23.11.0 00:02:50.554 238778122a doc: update release notes for 23.11 00:02:50.554 46aa6b3cfc doc: fix description of RSS features 00:02:50.554 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:50.554 7e421ae345 devtools: support skipping forbid rule check 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:50.554 00:34:42 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:50.555 00:34:42 build_native_dpdk -- 
scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:50.555 patching file config/rte_config.h 00:02:50.555 Hunk #1 succeeded at 60 (offset 1 line). 
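The `lt 23.11.0 21.11.0` trace above, and the `lt`/`ge` checks against 24.07.0 that follow, step through the field-wise version comparison in scripts/common.sh: each version string is split on `.`, `-`, and `:`, then compared numerically field by field. Below is a minimal standalone sketch of that logic under the same function names; it is a simplified stand-in, not the actual scripts/common.sh (the real helpers also validate each field through `decimal`, as the trace shows, and support more operators).

# Sketch of the field-wise comparison traced above (simplified, not the
# real scripts/common.sh implementation).
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]]; return   # higher field decides
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]]; return   # lower field decides
        fi
    done
    [[ $op == *=* ]]    # every field equal: true only for <=, >=, ==
}
lt() { cmp_versions "$1" '<'  "$2"; }
ge() { cmp_versions "$1" '>=' "$2"; }

lt 23.11.0 21.11.0 || echo "not older"   # return 1, matching the trace above
lt 23.11.0 24.07.0 && echo "older"       # return 0, so the pcapng patch below is applied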
00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:50.555 patching file lib/pcapng/rte_pcapng.c 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:50.555 00:34:42 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:50.555 00:34:42 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:55.879 The Meson build system 00:02:55.879 Version: 1.5.0 00:02:55.879 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:55.879 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:55.879 Build type: native build 00:02:55.879 Program cat found: YES (/usr/bin/cat) 00:02:55.879 Project name: DPDK 00:02:55.879 Project version: 23.11.0 00:02:55.879 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:55.879 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:55.879 Host machine cpu family: x86_64 00:02:55.879 Host machine cpu: x86_64 00:02:55.879 Message: ## Building in Developer Mode ## 00:02:55.879 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:55.879 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:55.879 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:55.879 Program python3 found: YES (/usr/bin/python3) 00:02:55.879 Program cat found: YES (/usr/bin/cat) 00:02:55.879 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
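The deprecation warning above is triggered by the `-Dmachine=native` flag in the meson invocation; on DPDK releases that accept both spellings, the non-deprecated equivalent (illustrative only, not something this job applies) would be:

# Same configure step with the non-deprecated flag (DPDK >= 22.03);
# all other options copied from the invocation above.
meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base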
00:02:55.879 Compiler for C supports arguments -march=native: YES 00:02:55.879 Checking for size of "void *" : 8 00:02:55.879 Checking for size of "void *" : 8 (cached) 00:02:55.879 Library m found: YES 00:02:55.879 Library numa found: YES 00:02:55.879 Has header "numaif.h" : YES 00:02:55.879 Library fdt found: NO 00:02:55.879 Library execinfo found: NO 00:02:55.879 Has header "execinfo.h" : YES 00:02:55.879 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:55.879 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:55.879 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:55.879 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:55.879 Run-time dependency openssl found: YES 3.1.1 00:02:55.879 Run-time dependency libpcap found: YES 1.10.4 00:02:55.879 Has header "pcap.h" with dependency libpcap: YES 00:02:55.879 Compiler for C supports arguments -Wcast-qual: YES 00:02:55.879 Compiler for C supports arguments -Wdeprecated: YES 00:02:55.879 Compiler for C supports arguments -Wformat: YES 00:02:55.879 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:55.879 Compiler for C supports arguments -Wformat-security: NO 00:02:55.879 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:55.879 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:55.879 Compiler for C supports arguments -Wnested-externs: YES 00:02:55.879 Compiler for C supports arguments -Wold-style-definition: YES 00:02:55.879 Compiler for C supports arguments -Wpointer-arith: YES 00:02:55.879 Compiler for C supports arguments -Wsign-compare: YES 00:02:55.879 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:55.879 Compiler for C supports arguments -Wundef: YES 00:02:55.879 Compiler for C supports arguments -Wwrite-strings: YES 00:02:55.879 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:55.879 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:55.879 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:55.879 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:55.879 Program objdump found: YES (/usr/bin/objdump) 00:02:55.879 Compiler for C supports arguments -mavx512f: YES 00:02:55.879 Checking if "AVX512 checking" compiles: YES 00:02:55.879 Fetching value of define "__SSE4_2__" : 1 00:02:55.879 Fetching value of define "__AES__" : 1 00:02:55.879 Fetching value of define "__AVX__" : 1 00:02:55.879 Fetching value of define "__AVX2__" : 1 00:02:55.879 Fetching value of define "__AVX512BW__" : 1 00:02:55.879 Fetching value of define "__AVX512CD__" : 1 00:02:55.879 Fetching value of define "__AVX512DQ__" : 1 00:02:55.879 Fetching value of define "__AVX512F__" : 1 00:02:55.879 Fetching value of define "__AVX512VL__" : 1 00:02:55.879 Fetching value of define "__PCLMUL__" : 1 00:02:55.879 Fetching value of define "__RDRND__" : 1 00:02:55.879 Fetching value of define "__RDSEED__" : 1 00:02:55.879 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:55.879 Fetching value of define "__znver1__" : (undefined) 00:02:55.879 Fetching value of define "__znver2__" : (undefined) 00:02:55.879 Fetching value of define "__znver3__" : (undefined) 00:02:55.879 Fetching value of define "__znver4__" : (undefined) 00:02:55.879 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:55.879 Message: lib/log: Defining dependency "log" 00:02:55.879 Message: lib/kvargs: Defining dependency "kvargs" 00:02:55.879 Message: lib/telemetry: Defining dependency "telemetry" 
00:02:55.879 Checking for function "getentropy" : NO 00:02:55.879 Message: lib/eal: Defining dependency "eal" 00:02:55.879 Message: lib/ring: Defining dependency "ring" 00:02:55.879 Message: lib/rcu: Defining dependency "rcu" 00:02:55.879 Message: lib/mempool: Defining dependency "mempool" 00:02:55.879 Message: lib/mbuf: Defining dependency "mbuf" 00:02:55.879 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:55.879 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:55.879 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:55.879 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:55.879 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:55.879 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:55.879 Compiler for C supports arguments -mpclmul: YES 00:02:55.879 Compiler for C supports arguments -maes: YES 00:02:55.879 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:55.879 Compiler for C supports arguments -mavx512bw: YES 00:02:55.879 Compiler for C supports arguments -mavx512dq: YES 00:02:55.879 Compiler for C supports arguments -mavx512vl: YES 00:02:55.879 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:55.879 Compiler for C supports arguments -mavx2: YES 00:02:55.879 Compiler for C supports arguments -mavx: YES 00:02:55.879 Message: lib/net: Defining dependency "net" 00:02:55.879 Message: lib/meter: Defining dependency "meter" 00:02:55.879 Message: lib/ethdev: Defining dependency "ethdev" 00:02:55.879 Message: lib/pci: Defining dependency "pci" 00:02:55.879 Message: lib/cmdline: Defining dependency "cmdline" 00:02:55.879 Message: lib/metrics: Defining dependency "metrics" 00:02:55.879 Message: lib/hash: Defining dependency "hash" 00:02:55.879 Message: lib/timer: Defining dependency "timer" 00:02:55.879 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:55.879 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:55.879 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:55.879 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:55.879 Message: lib/acl: Defining dependency "acl" 00:02:55.879 Message: lib/bbdev: Defining dependency "bbdev" 00:02:55.879 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:55.879 Run-time dependency libelf found: YES 0.191 00:02:55.879 Message: lib/bpf: Defining dependency "bpf" 00:02:55.879 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:55.879 Message: lib/compressdev: Defining dependency "compressdev" 00:02:55.879 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:55.879 Message: lib/distributor: Defining dependency "distributor" 00:02:55.879 Message: lib/dmadev: Defining dependency "dmadev" 00:02:55.879 Message: lib/efd: Defining dependency "efd" 00:02:55.879 Message: lib/eventdev: Defining dependency "eventdev" 00:02:55.879 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:55.879 Message: lib/gpudev: Defining dependency "gpudev" 00:02:55.879 Message: lib/gro: Defining dependency "gro" 00:02:55.879 Message: lib/gso: Defining dependency "gso" 00:02:55.879 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:55.879 Message: lib/jobstats: Defining dependency "jobstats" 00:02:55.879 Message: lib/latencystats: Defining dependency "latencystats" 00:02:55.880 Message: lib/lpm: Defining dependency "lpm" 00:02:55.880 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:55.880 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:55.880 Fetching value of define "__AVX512IFMA__" : 1 00:02:55.880 Message: 
00:02:55.880 Message: lib/pcapng: Defining dependency "pcapng"
00:02:55.880 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:55.880 Message: lib/power: Defining dependency "power"
00:02:55.880 Message: lib/rawdev: Defining dependency "rawdev"
00:02:55.880 Message: lib/regexdev: Defining dependency "regexdev"
00:02:55.880 Message: lib/mldev: Defining dependency "mldev"
00:02:55.880 Message: lib/rib: Defining dependency "rib"
00:02:55.880 Message: lib/reorder: Defining dependency "reorder"
00:02:55.880 Message: lib/sched: Defining dependency "sched"
00:02:55.880 Message: lib/security: Defining dependency "security"
00:02:55.880 Message: lib/stack: Defining dependency "stack"
00:02:55.880 Has header "linux/userfaultfd.h" : YES
00:02:55.880 Has header "linux/vduse.h" : YES
00:02:55.880 Message: lib/vhost: Defining dependency "vhost"
00:02:55.880 Message: lib/ipsec: Defining dependency "ipsec"
00:02:55.880 Message: lib/pdcp: Defining dependency "pdcp"
00:02:55.880 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:55.880 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:55.880 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:55.880 Message: lib/fib: Defining dependency "fib"
00:02:55.880 Message: lib/port: Defining dependency "port"
00:02:55.880 Message: lib/pdump: Defining dependency "pdump"
00:02:55.880 Message: lib/table: Defining dependency "table"
00:02:55.880 Message: lib/pipeline: Defining dependency "pipeline"
00:02:55.880 Message: lib/graph: Defining dependency "graph"
00:02:55.880 Message: lib/node: Defining dependency "node"
00:02:55.880 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:55.880 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:55.880 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:55.880 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:56.454 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:56.454 Compiler for C supports arguments -Wno-unused-value: YES
00:02:56.454 Compiler for C supports arguments -Wno-format: YES
00:02:56.454 Compiler for C supports arguments -Wno-format-security: YES
00:02:56.454 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:56.454 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:56.454 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:56.454 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:56.454 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:56.454 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:56.454 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:56.454 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:56.454 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:56.454 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:56.454 Has header "sys/epoll.h" : YES
00:02:56.454 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:56.454 Configuring doxy-api-html.conf using configuration
00:02:56.454 Configuring doxy-api-man.conf using configuration
00:02:56.454 Program mandb found: YES (/usr/bin/mandb)
00:02:56.454 Program sphinx-build found: NO
00:02:56.454 Configuring rte_build_config.h using configuration
00:02:56.454 Message:
00:02:56.454 =================
00:02:56.454 Applications Enabled
00:02:56.454 =================
00:02:56.454
00:02:56.454 apps:
00:02:56.454 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:56.454 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:56.454 test-pmd, test-regex, test-sad, test-security-perf,
00:02:56.454
00:02:56.454 Message:
00:02:56.454 =================
00:02:56.454 Libraries Enabled
00:02:56.454 =================
00:02:56.454
00:02:56.454 libs:
00:02:56.454 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:56.454 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:56.454 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:56.454 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:56.454 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:56.454 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:56.454 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:56.454
00:02:56.454
00:02:56.454 Message:
00:02:56.454 ===============
00:02:56.454 Drivers Enabled
00:02:56.454 ===============
00:02:56.454
00:02:56.454 common:
00:02:56.454
00:02:56.454 bus:
00:02:56.454 pci, vdev,
00:02:56.454 mempool:
00:02:56.454 ring,
00:02:56.454 dma:
00:02:56.454
00:02:56.454 net:
00:02:56.454 i40e,
00:02:56.454 raw:
00:02:56.454
00:02:56.454 crypto:
00:02:56.454
00:02:56.454 compress:
00:02:56.454
00:02:56.454 regex:
00:02:56.454
00:02:56.454 ml:
00:02:56.454
00:02:56.454 vdpa:
00:02:56.454
00:02:56.454 event:
00:02:56.454
00:02:56.454 baseband:
00:02:56.454
00:02:56.454 gpu:
00:02:56.454
00:02:56.454
00:02:56.454 Message:
00:02:56.454 =================
00:02:56.454 Content Skipped
00:02:56.454 =================
00:02:56.454
00:02:56.454 apps:
00:02:56.454
00:02:56.454 libs:
00:02:56.454
00:02:56.454 drivers:
00:02:56.454 common/cpt: not in enabled drivers build config
00:02:56.454 common/dpaax: not in enabled drivers build config
00:02:56.454 common/iavf: not in enabled drivers build config
00:02:56.454 common/idpf: not in enabled drivers build config
00:02:56.454 common/mvep: not in enabled drivers build config
00:02:56.454 common/octeontx: not in enabled drivers build config
00:02:56.454 bus/auxiliary: not in enabled drivers build config
00:02:56.454 bus/cdx: not in enabled drivers build config
00:02:56.454 bus/dpaa: not in enabled drivers build config
00:02:56.454 bus/fslmc: not in enabled drivers build config
00:02:56.454 bus/ifpga: not in enabled drivers build config
00:02:56.454 bus/platform: not in enabled drivers build config
00:02:56.454 bus/vmbus: not in enabled drivers build config
00:02:56.454 common/cnxk: not in enabled drivers build config
00:02:56.454 common/mlx5: not in enabled drivers build config
00:02:56.454 common/nfp: not in enabled drivers build config
00:02:56.454 common/qat: not in enabled drivers build config
00:02:56.454 common/sfc_efx: not in enabled drivers build config
00:02:56.454 mempool/bucket: not in enabled drivers build config
00:02:56.454 mempool/cnxk: not in enabled drivers build config
00:02:56.455 mempool/dpaa: not in enabled drivers build config
00:02:56.455 mempool/dpaa2: not in enabled drivers build config
00:02:56.455 mempool/octeontx: not in enabled drivers build config
00:02:56.455 mempool/stack: not in enabled drivers build config
00:02:56.455 dma/cnxk: not in enabled drivers build config
00:02:56.455 dma/dpaa: not in enabled drivers build config
00:02:56.455 dma/dpaa2: not in enabled drivers build config
00:02:56.455 dma/hisilicon: not in enabled drivers build config
00:02:56.455 dma/idxd: not in enabled drivers build config
00:02:56.455 dma/ioat: not in enabled drivers build config
00:02:56.455 dma/skeleton: not in enabled drivers build config
00:02:56.455 net/af_packet: not in enabled drivers build config
00:02:56.455 net/af_xdp: not in enabled drivers build config
00:02:56.455 net/ark: not in enabled drivers build config
00:02:56.455 net/atlantic: not in enabled drivers build config
00:02:56.455 net/avp: not in enabled drivers build config
00:02:56.455 net/axgbe: not in enabled drivers build config
00:02:56.455 net/bnx2x: not in enabled drivers build config
00:02:56.455 net/bnxt: not in enabled drivers build config
00:02:56.455 net/bonding: not in enabled drivers build config
00:02:56.455 net/cnxk: not in enabled drivers build config
00:02:56.455 net/cpfl: not in enabled drivers build config
00:02:56.455 net/cxgbe: not in enabled drivers build config
00:02:56.455 net/dpaa: not in enabled drivers build config
00:02:56.455 net/dpaa2: not in enabled drivers build config
00:02:56.455 net/e1000: not in enabled drivers build config
00:02:56.455 net/ena: not in enabled drivers build config
00:02:56.455 net/enetc: not in enabled drivers build config
00:02:56.455 net/enetfec: not in enabled drivers build config
00:02:56.455 net/enic: not in enabled drivers build config
00:02:56.455 net/failsafe: not in enabled drivers build config
00:02:56.455 net/fm10k: not in enabled drivers build config
00:02:56.455 net/gve: not in enabled drivers build config
00:02:56.455 net/hinic: not in enabled drivers build config
00:02:56.455 net/hns3: not in enabled drivers build config
00:02:56.455 net/iavf: not in enabled drivers build config
00:02:56.455 net/ice: not in enabled drivers build config
00:02:56.455 net/idpf: not in enabled drivers build config
00:02:56.455 net/igc: not in enabled drivers build config
00:02:56.455 net/ionic: not in enabled drivers build config
00:02:56.455 net/ipn3ke: not in enabled drivers build config
00:02:56.455 net/ixgbe: not in enabled drivers build config
00:02:56.455 net/mana: not in enabled drivers build config
00:02:56.455 net/memif: not in enabled drivers build config
00:02:56.455 net/mlx4: not in enabled drivers build config
00:02:56.455 net/mlx5: not in enabled drivers build config
00:02:56.455 net/mvneta: not in enabled drivers build config
00:02:56.455 net/mvpp2: not in enabled drivers build config
00:02:56.455 net/netvsc: not in enabled drivers build config
00:02:56.455 net/nfb: not in enabled drivers build config
00:02:56.455 net/nfp: not in enabled drivers build config
00:02:56.455 net/ngbe: not in enabled drivers build config
00:02:56.455 net/null: not in enabled drivers build config
00:02:56.455 net/octeontx: not in enabled drivers build config
00:02:56.455 net/octeon_ep: not in enabled drivers build config
00:02:56.455 net/pcap: not in enabled drivers build config
00:02:56.455 net/pfe: not in enabled drivers build config
00:02:56.455 net/qede: not in enabled drivers build config
00:02:56.455 net/ring: not in enabled drivers build config
00:02:56.455 net/sfc: not in enabled drivers build config
00:02:56.455 net/softnic: not in enabled drivers build config
00:02:56.455 net/tap: not in enabled drivers build config
00:02:56.455 net/thunderx: not in enabled drivers build config
00:02:56.455 net/txgbe: not in enabled drivers build config
00:02:56.455 net/vdev_netvsc: not in enabled drivers build config
00:02:56.455 net/vhost: not in enabled drivers build config
00:02:56.455 net/virtio: not in enabled drivers build config
00:02:56.455 net/vmxnet3: not in enabled drivers build config
00:02:56.455 raw/cnxk_bphy: not in enabled drivers build config
00:02:56.455 raw/cnxk_gpio: not in enabled drivers build config
00:02:56.455 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:56.455 raw/ifpga: not in enabled drivers build config
00:02:56.455 raw/ntb: not in enabled drivers build config
00:02:56.455 raw/skeleton: not in enabled drivers build config
00:02:56.455 crypto/armv8: not in enabled drivers build config
00:02:56.455 crypto/bcmfs: not in enabled drivers build config
00:02:56.455 crypto/caam_jr: not in enabled drivers build config
00:02:56.455 crypto/ccp: not in enabled drivers build config
00:02:56.455 crypto/cnxk: not in enabled drivers build config
00:02:56.455 crypto/dpaa_sec: not in enabled drivers build config
00:02:56.455 crypto/dpaa2_sec: not in enabled drivers build config
00:02:56.455 crypto/ipsec_mb: not in enabled drivers build config
00:02:56.455 crypto/mlx5: not in enabled drivers build config
00:02:56.455 crypto/mvsam: not in enabled drivers build config
00:02:56.455 crypto/nitrox: not in enabled drivers build config
00:02:56.455 crypto/null: not in enabled drivers build config
00:02:56.455 crypto/octeontx: not in enabled drivers build config
00:02:56.455 crypto/openssl: not in enabled drivers build config
00:02:56.455 crypto/scheduler: not in enabled drivers build config
00:02:56.455 crypto/uadk: not in enabled drivers build config
00:02:56.455 crypto/virtio: not in enabled drivers build config
00:02:56.455 compress/isal: not in enabled drivers build config
00:02:56.455 compress/mlx5: not in enabled drivers build config
00:02:56.455 compress/octeontx: not in enabled drivers build config
00:02:56.455 compress/zlib: not in enabled drivers build config
00:02:56.455 regex/mlx5: not in enabled drivers build config
00:02:56.455 regex/cn9k: not in enabled drivers build config
00:02:56.455 ml/cnxk: not in enabled drivers build config
00:02:56.455 vdpa/ifc: not in enabled drivers build config
00:02:56.455 vdpa/mlx5: not in enabled drivers build config
00:02:56.455 vdpa/nfp: not in enabled drivers build config
00:02:56.455 vdpa/sfc: not in enabled drivers build config
00:02:56.455 event/cnxk: not in enabled drivers build config
00:02:56.455 event/dlb2: not in enabled drivers build config
00:02:56.455 event/dpaa: not in enabled drivers build config
00:02:56.455 event/dpaa2: not in enabled drivers build config
00:02:56.455 event/dsw: not in enabled drivers build config
00:02:56.455 event/opdl: not in enabled drivers build config
00:02:56.455 event/skeleton: not in enabled drivers build config
00:02:56.455 event/sw: not in enabled drivers build config
00:02:56.455 event/octeontx: not in enabled drivers build config
00:02:56.455 baseband/acc: not in enabled drivers build config
00:02:56.455 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:56.455 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:56.455 baseband/la12xx: not in enabled drivers build config
00:02:56.455 baseband/null: not in enabled drivers build config
00:02:56.455 baseband/turbo_sw: not in enabled drivers build config
00:02:56.455 gpu/cuda: not in enabled drivers build config
00:02:56.455
00:02:56.455
00:02:56.455 Build targets in project: 215
00:02:56.455
00:02:56.455 DPDK 23.11.0
00:02:56.455
00:02:56.455 User defined options
00:02:56.455 libdir : lib
00:02:56.455 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:56.455 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:56.455 c_link_args :
00:02:56.455 enable_docs : false
00:02:56.455 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:56.455 enable_kmods : false
00:02:56.455 machine : native
00:02:56.455 tests : false
00:02:56.455
00:02:56.455 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:56.455 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:56.716 00:34:48 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:56.717 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:56.717 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:56.717 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:56.979 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:56.979 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:56.979 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:56.979 [6/705] Linking static target lib/librte_kvargs.a
00:02:56.979 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:56.979 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:56.979 [9/705] Linking static target lib/librte_log.a
00:02:56.979 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:56.979 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:57.240 [12/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.240 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:57.240 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:57.240 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:57.240 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:57.501 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.501 [18/705] Linking target lib/librte_log.so.24.0
00:02:57.501 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:57.501 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:57.501 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:57.501 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:57.501 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:57.501 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:57.763 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:57.763 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:57.763 [27/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:57.763 [28/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:57.763 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:57.763 [30/705] Linking static target lib/librte_telemetry.a
00:02:57.763 [31/705] Linking target lib/librte_kvargs.so.24.0
00:02:58.024 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:58.024 [33/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:58.024 [34/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
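librte_kvargs, one of the first targets linked above, is the small "key=value" device-argument parser the rest of DPDK builds on. A minimal hedged sketch of its API (the key names below are invented purely for illustration):

    #include <stdio.h>
    #include <rte_kvargs.h>

    /* Called once for each occurrence of the matched key. */
    static int show(const char *key, const char *value, void *opaque)
    {
        (void)opaque;
        printf("%s = %s\n", key, value);
        return 0;
    }

    int main(void)
    {
        static const char *valid[] = { "iface", "queues", NULL };  /* made-up keys */
        struct rte_kvargs *kv = rte_kvargs_parse("iface=net0,queues=4", valid);
        if (kv == NULL)
            return 1;
        rte_kvargs_process(kv, "queues", show, NULL);
        rte_kvargs_free(kv);
        return 0;
    }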
00:02:58.024 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:58.025 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:58.025 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:58.025 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:58.025 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:58.025 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:58.025 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:58.284 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.284 [43/705] Linking target lib/librte_telemetry.so.24.0
00:02:58.284 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:58.284 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:58.284 [46/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:58.543 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:58.543 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:58.543 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:58.543 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:58.543 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:58.543 [52/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:58.543 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:58.543 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:58.802 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:58.802 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:58.802 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:58.802 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:58.802 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:58.802 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:58.802 [61/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:58.802 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:58.802 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:58.802 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:58.802 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:58.802 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:58.802 [67/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:59.061 [68/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:59.061 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:59.061 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:59.061 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:59.061 [72/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:59.061 [73/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
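The eal_common/eal_unix/eal_linux objects above form the Environment Abstraction Layer that every DPDK program initializes first. A minimal sketch of the init/cleanup pattern (assuming a pkg-config `libdpdk` install; EAL arguments such as `-l 0-1 --no-pci` are passed on the command line):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* Consumes the EAL portion of argv and brings up hugepages, lcores, PCI. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        printf("EAL ready, %u lcore(s)\n", rte_lcore_count());
        rte_eal_cleanup();
        return 0;
    }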
00:02:59.061 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:59.061 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:59.319 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:59.319 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:59.320 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:59.320 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:59.320 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:59.320 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:59.320 [82/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:59.578 [83/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:59.578 [84/705] Linking static target lib/librte_ring.a
00:02:59.578 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:59.578 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:59.578 [87/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:59.578 [88/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:59.836 [89/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.836 [90/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:59.836 [91/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:59.836 [92/705] Linking static target lib/librte_eal.a
00:02:59.836 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:59.836 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:59.836 [95/705] Linking static target lib/librte_mempool.a
00:02:59.836 [96/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:59.836 [97/705] Linking static target lib/librte_rcu.a
00:03:00.095 [98/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:00.095 [99/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:00.095 [100/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:00.095 [101/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:00.095 [102/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:00.095 [103/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.095 [104/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:00.354 [105/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:00.354 [106/705] Linking static target lib/librte_mbuf.a
00:03:00.354 [107/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:03:00.354 [108/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.354 [109/705] Linking static target lib/librte_net.a
00:03:00.354 [110/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:00.354 [111/705] Linking static target lib/librte_meter.a
00:03:00.354 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:00.354 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:00.612 [114/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:00.612 [115/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
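librte_ring, librte_mempool and librte_mbuf, all linked static above, supply the lockless FIFO and buffer infrastructure under the packet path. A hedged sketch of the ring API (the ring name and sizes are illustrative; must run after rte_eal_init()):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_memory.h>
    #include <rte_ring.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;
        /* 1024 slots; flags 0 = default multi-producer/multi-consumer mode. */
        struct rte_ring *r = rte_ring_create("demo_ring", 1024, SOCKET_ID_ANY, 0);
        if (r == NULL)
            return 1;
        int v = 42;
        void *obj;
        rte_ring_enqueue(r, &v);               /* returns 0 on success */
        if (rte_ring_dequeue(r, &obj) == 0)
            printf("dequeued %d\n", *(int *)obj);
        rte_ring_free(r);
        rte_eal_cleanup();
        return 0;
    }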
00:03:00.612 [116/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.612 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:00.870 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.870 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:00.870 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:01.128 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:01.128 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:01.128 [123/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:01.128 [124/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:01.128 [125/705] Linking static target lib/librte_pci.a
00:03:01.129 [126/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:01.387 [127/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.387 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:01.387 [129/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:01.387 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:01.387 [131/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:01.387 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:01.387 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:01.387 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:01.387 [135/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:01.387 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:01.387 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:01.387 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:01.387 [139/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:01.646 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:01.646 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:01.646 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:01.646 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:01.646 [144/705] Linking static target lib/librte_cmdline.a
00:03:01.646 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:03:01.646 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:03:01.646 [147/705] Linking static target lib/librte_metrics.a
00:03:01.904 [148/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:01.904 [149/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:02.163 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.163 [151/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:02.163 [152/705] Linking static target lib/librte_timer.a
00:03:02.163 [153/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:02.421 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
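With mempool and mbuf linked a few steps back and the ethdev objects compiling here, the natural companion piece is a packet-buffer pool, which is what an ethdev RX queue later drains from. A hedged sketch (pool name and sizes are arbitrary illustration):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;
        /* 8191 mbufs, 250-entry per-lcore cache, default buffer size. */
        struct rte_mempool *mp = rte_pktmbuf_pool_create("demo_pool",
                8191, 250, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (mp == NULL)
            return 1;
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        if (m != NULL && rte_pktmbuf_append(m, 64) != NULL)
            printf("mbuf carries %u bytes\n", rte_pktmbuf_data_len(m));
        rte_pktmbuf_free(m);                   /* NULL-safe */
        rte_eal_cleanup();
        return 0;
    }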
00:03:02.421 [155/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:03:02.421 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.421 [157/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:03:02.421 [158/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:03:02.680 [159/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:03:02.680 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:03:02.680 [161/705] Linking static target lib/librte_bitratestats.a
00:03:02.938 [162/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:03:02.938 [163/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:03:02.938 [164/705] Linking static target lib/librte_bbdev.a
00:03:02.938 [165/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.197 [166/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:03:03.197 [167/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:03:03.197 [168/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:03.197 [169/705] Linking static target lib/librte_hash.a
00:03:03.456 [170/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:03:03.456 [171/705] Linking static target lib/acl/libavx2_tmp.a
00:03:03.456 [172/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:03:03.456 [173/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.456 [174/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:03:03.456 [175/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:03:03.456 [176/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:03.714 [177/705] Linking static target lib/librte_ethdev.a
00:03:03.714 [178/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:03:03.714 [179/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.714 [180/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:03:03.972 [181/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:03:03.972 [182/705] Linking static target lib/librte_cfgfile.a
00:03:03.972 [183/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:03:03.972 [184/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.972 [185/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:03:03.972 [186/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:03:03.972 [187/705] Linking target lib/librte_eal.so.24.0
00:03:03.972 [188/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:03.972 [189/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:04.230 [190/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.230 [191/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:03:04.230 [192/705] Linking target lib/librte_ring.so.24.0
00:03:04.230 [193/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:03:04.230 [194/705] Linking target lib/librte_meter.so.24.0
00:03:04.230 [195/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:03:04.230 [196/705] Linking target lib/librte_rcu.so.24.0
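librte_hash, linked static above from the cuckoo-hash objects, is DPDK's flow-table workhorse. A hedged sketch of create/insert/lookup (the table name, key type and stored value are invented for illustration):

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;
        struct rte_hash_parameters p = {
            .name = "demo_hash",
            .entries = 1024,
            .key_len = sizeof(uint32_t),
            .hash_func = rte_jhash,
            .socket_id = (int)rte_socket_id(),
        };
        struct rte_hash *h = rte_hash_create(&p);
        if (h == NULL)
            return 1;
        uint32_t key = 7;
        rte_hash_add_key_data(h, &key, (void *)(uintptr_t)1234);
        void *data;
        if (rte_hash_lookup_data(h, &key, &data) >= 0)   /* >= 0 means hit */
            printf("key %u -> %lu\n", key, (unsigned long)(uintptr_t)data);
        rte_hash_free(h);
        rte_eal_cleanup();
        return 0;
    }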
00:03:04.230 [197/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:03:04.230 [198/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:04.230 [199/705] Linking target lib/librte_mempool.so.24.0
00:03:04.230 [200/705] Linking target lib/librte_pci.so.24.0
00:03:04.489 [201/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:03:04.489 [202/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:03:04.489 [203/705] Linking target lib/librte_timer.so.24.0
00:03:04.489 [204/705] Linking static target lib/librte_acl.a
00:03:04.489 [205/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:04.489 [206/705] Linking static target lib/librte_bpf.a
00:03:04.489 [207/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:03:04.489 [208/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:03:04.489 [209/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:04.489 [210/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:03:04.489 [211/705] Linking static target lib/librte_compressdev.a
00:03:04.489 [212/705] Linking target lib/librte_mbuf.so.24.0
00:03:04.489 [213/705] Linking target lib/librte_cfgfile.so.24.0
00:03:04.489 [214/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:03:04.489 [215/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:03:04.489 [216/705] Linking target lib/librte_net.so.24.0
00:03:04.489 [217/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.489 [218/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.489 [219/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:03:04.748 [220/705] Linking target lib/librte_bbdev.so.24.0
00:03:04.748 [221/705] Linking target lib/librte_acl.so.24.0
00:03:04.748 [222/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:03:04.748 [223/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:03:04.748 [224/705] Linking target lib/librte_cmdline.so.24.0
00:03:04.748 [225/705] Linking static target lib/librte_distributor.a
00:03:04.748 [226/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:04.748 [227/705] Linking target lib/librte_hash.so.24.0
00:03:04.748 [228/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:03:04.748 [229/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:03:05.006 [230/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.006 [231/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.006 [232/705] Linking target lib/librte_compressdev.so.24.0
00:03:05.006 [233/705] Linking target lib/librte_distributor.so.24.0
00:03:05.006 [234/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:03:05.006 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:05.006 [236/705] Linking static target lib/librte_dmadev.a
00:03:05.264 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:03:05.264 [238/705] Generating lib/dmadev.sym_chk with a custom
command (wrapped by meson to capture output) 00:03:05.264 [239/705] Linking target lib/librte_dmadev.so.24.0 00:03:05.264 [240/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:05.521 [241/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:05.521 [242/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:05.521 [243/705] Linking static target lib/librte_efd.a 00:03:05.521 [244/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.521 [245/705] Linking static target lib/librte_cryptodev.a 00:03:05.780 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.780 [247/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:05.780 [248/705] Linking target lib/librte_efd.so.24.0 00:03:05.780 [249/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:06.038 [250/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:06.038 [251/705] Linking static target lib/librte_dispatcher.a 00:03:06.038 [252/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:06.038 [253/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:06.296 [254/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:06.296 [255/705] Linking static target lib/librte_gpudev.a 00:03:06.296 [256/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:06.296 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.296 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:06.296 [259/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.554 [260/705] Linking target lib/librte_cryptodev.so.24.0 00:03:06.554 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:06.554 [262/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:06.554 [263/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:06.554 [264/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:06.813 [265/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:06.813 [266/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:06.813 [267/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.813 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:06.813 [269/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:06.813 [270/705] Linking static target lib/librte_gro.a 00:03:06.813 [271/705] Linking target lib/librte_gpudev.so.24.0 00:03:06.813 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:06.813 [273/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:06.813 [274/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.071 [275/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:07.071 [276/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.071 [277/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:07.071 [278/705] Linking static target lib/librte_gso.a 00:03:07.071 [279/705] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:07.071 [280/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:07.071 [281/705] Linking target lib/librte_ethdev.so.24.0 00:03:07.071 [282/705] Linking static target lib/librte_eventdev.a 00:03:07.071 [283/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:07.071 [284/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.071 [285/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:07.328 [286/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:07.328 [287/705] Linking target lib/librte_metrics.so.24.0 00:03:07.328 [288/705] Linking target lib/librte_bpf.so.24.0 00:03:07.328 [289/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:07.328 [290/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:07.328 [291/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:07.328 [292/705] Linking static target lib/librte_jobstats.a 00:03:07.328 [293/705] Linking target lib/librte_gro.so.24.0 00:03:07.328 [294/705] Linking target lib/librte_gso.so.24.0 00:03:07.328 [295/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:07.328 [296/705] Linking target lib/librte_bitratestats.so.24.0 00:03:07.328 [297/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:07.328 [298/705] Linking static target lib/librte_ip_frag.a 00:03:07.328 [299/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:07.328 [300/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:07.586 [301/705] Linking static target lib/librte_latencystats.a 00:03:07.586 [302/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.586 [303/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:07.586 [304/705] Linking target lib/librte_jobstats.so.24.0 00:03:07.586 [305/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.586 [306/705] Linking target lib/librte_ip_frag.so.24.0 00:03:07.586 [307/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:07.586 [308/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.586 [309/705] Linking target lib/librte_latencystats.so.24.0 00:03:07.586 [310/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:07.586 [311/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:07.844 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:07.844 [313/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:07.844 [314/705] Linking static target lib/librte_lpm.a 00:03:07.844 [315/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:07.844 [316/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:08.103 [317/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:08.103 [318/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:08.103 [319/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:08.103 [320/705] Compiling C object 
lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:08.103 [321/705] Linking static target lib/librte_pcapng.a 00:03:08.103 [322/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.103 [323/705] Linking target lib/librte_lpm.so.24.0 00:03:08.103 [324/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:08.103 [325/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:08.362 [326/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:08.362 [327/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.362 [328/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:08.362 [329/705] Linking target lib/librte_pcapng.so.24.0 00:03:08.362 [330/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:08.362 [331/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:08.362 [332/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:08.362 [333/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:08.620 [334/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:08.620 [335/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:08.620 [336/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:08.620 [337/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.620 [338/705] Linking static target lib/librte_power.a 00:03:08.620 [339/705] Linking target lib/librte_eventdev.so.24.0 00:03:08.620 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:08.620 [341/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:08.620 [342/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:08.620 [343/705] Linking target lib/librte_dispatcher.so.24.0 00:03:08.620 [344/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:08.620 [345/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:08.620 [346/705] Linking static target lib/librte_regexdev.a 00:03:08.620 [347/705] Linking static target lib/librte_mldev.a 00:03:08.878 [348/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:08.878 [349/705] Linking static target lib/librte_rawdev.a 00:03:08.878 [350/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:08.878 [351/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:08.879 [352/705] Linking static target lib/librte_member.a 00:03:08.879 [353/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:09.137 [354/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.137 [355/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:09.137 [356/705] Linking target lib/librte_power.so.24.0 00:03:09.137 [357/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.137 [358/705] Linking target lib/librte_rawdev.so.24.0 00:03:09.137 [359/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:09.137 [360/705] Linking static target lib/librte_reorder.a 00:03:09.137 [361/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:09.137 [362/705] 
Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.137 [363/705] Linking static target lib/librte_rib.a 00:03:09.137 [364/705] Linking target lib/librte_member.so.24.0 00:03:09.137 [365/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:09.137 [366/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.395 [367/705] Linking target lib/librte_regexdev.so.24.0 00:03:09.395 [368/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:09.395 [369/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:09.395 [370/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.395 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:09.395 [372/705] Linking target lib/librte_reorder.so.24.0 00:03:09.395 [373/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:09.395 [374/705] Linking static target lib/librte_stack.a 00:03:09.395 [375/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:09.395 [376/705] Linking static target lib/librte_security.a 00:03:09.395 [377/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:09.395 [378/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.395 [379/705] Linking target lib/librte_rib.so.24.0 00:03:09.654 [380/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.654 [381/705] Linking target lib/librte_stack.so.24.0 00:03:09.654 [382/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:09.654 [383/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.654 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:09.654 [385/705] Linking target lib/librte_mldev.so.24.0 00:03:09.654 [386/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:09.654 [387/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.912 [388/705] Linking target lib/librte_security.so.24.0 00:03:09.912 [389/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:09.912 [390/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:09.912 [391/705] Linking static target lib/librte_sched.a 00:03:09.912 [392/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:09.912 [393/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.912 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:10.170 [395/705] Linking target lib/librte_sched.so.24.0 00:03:10.171 [396/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:10.171 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:10.171 [398/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:10.171 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:10.428 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:10.428 [401/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:10.687 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:10.687 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:10.687 
[404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:10.687 [405/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:10.945 [406/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:10.945 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:10.945 [408/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:10.945 [409/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:10.945 [410/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:10.945 [411/705] Linking static target lib/librte_ipsec.a 00:03:10.945 [412/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:11.203 [413/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.203 [414/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:11.203 [415/705] Linking target lib/librte_ipsec.so.24.0 00:03:11.203 [416/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:11.203 [417/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:11.203 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:11.461 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:11.461 [420/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:11.461 [421/705] Linking static target lib/librte_fib.a 00:03:11.461 [422/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:11.718 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:11.718 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:11.718 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:11.718 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:11.718 [427/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.718 [428/705] Linking target lib/librte_fib.so.24.0 00:03:11.718 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:11.718 [430/705] Linking static target lib/librte_pdcp.a 00:03:11.976 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.976 [432/705] Linking target lib/librte_pdcp.so.24.0 00:03:12.235 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:12.235 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:12.235 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:12.235 [436/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:12.235 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:12.235 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:12.493 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:12.493 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:12.493 [441/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:12.493 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:12.493 [443/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:12.493 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:12.493 [445/705] Linking static target lib/librte_port.a 00:03:12.493 [446/705] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:12.751 [447/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:12.751 [448/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:12.751 [449/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:12.751 [450/705] Linking static target lib/librte_pdump.a 00:03:12.751 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:13.009 [452/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.009 [453/705] Linking target lib/librte_port.so.24.0 00:03:13.009 [454/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:13.009 [455/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.009 [456/705] Linking target lib/librte_pdump.so.24.0 00:03:13.267 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:13.267 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:13.267 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:13.267 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:13.267 [461/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:13.267 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:13.267 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:13.525 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:13.525 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:13.525 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:13.525 [467/705] Linking static target lib/librte_table.a 00:03:13.783 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:13.783 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:13.784 [470/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:14.042 [471/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.042 [472/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:14.042 [473/705] Linking target lib/librte_table.so.24.0 00:03:14.042 [474/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:14.042 [475/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:14.042 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:14.300 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:14.300 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:14.300 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:14.300 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:14.300 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:14.558 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:14.558 [483/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:14.558 [484/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:14.558 [485/705] Linking static target lib/librte_graph.a 00:03:14.817 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:14.817 [487/705] 
Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:14.817 [488/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:14.817 [489/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:15.075 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.075 [491/705] Linking target lib/librte_graph.so.24.0 00:03:15.075 [492/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:15.075 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:15.075 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:15.333 [495/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:15.333 [496/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:15.333 [497/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:15.333 [498/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:15.333 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:15.591 [500/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:15.591 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:15.591 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:15.591 [503/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:15.591 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:15.849 [505/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:15.849 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:15.849 [507/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:15.849 [508/705] Linking static target lib/librte_node.a 00:03:15.849 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:15.849 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:15.849 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.849 [512/705] Linking target lib/librte_node.so.24.0 00:03:16.108 [513/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:16.108 [514/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:16.108 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:16.108 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:16.108 [517/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:16.108 [518/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:16.108 [519/705] Linking static target drivers/librte_bus_vdev.a 00:03:16.108 [520/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:16.108 [521/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:16.369 [522/705] Linking static target drivers/librte_bus_pci.a 00:03:16.369 [523/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:16.369 [524/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:16.369 [525/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:16.369 [526/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:16.369 [527/705] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:16.369 [528/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.369 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:16.369 [530/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:16.369 [531/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:16.369 [532/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:16.673 [533/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:16.673 [534/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.673 [535/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:16.673 [536/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:16.673 [537/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:16.673 [538/705] Linking static target drivers/librte_mempool_ring.a 00:03:16.673 [539/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:16.673 [540/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:16.673 [541/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:16.673 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:16.958 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:17.221 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:17.221 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:17.479 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:17.737 [547/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:17.737 [548/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:17.737 [549/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:17.737 [550/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:17.737 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:17.995 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:17.995 [553/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:17.995 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:17.995 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:18.253 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:18.253 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:18.253 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:18.511 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:18.511 [560/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:18.511 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:18.768 [562/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:18.768 [563/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:18.768 [564/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:18.768 [565/705] Compiling C object 
app/dpdk-graph.p/graph_ip6_route.c.o 00:03:19.025 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:19.025 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:19.025 [568/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:19.025 [569/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:19.025 [570/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:19.025 [571/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:19.282 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:19.282 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:19.282 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:19.282 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:19.541 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:19.541 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:19.799 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:19.799 [579/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:19.799 [580/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:19.799 [581/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:19.799 [582/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:20.057 [583/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:20.057 [584/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:20.057 [585/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:20.057 [586/705] Linking static target drivers/librte_net_i40e.a 00:03:20.315 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:20.315 [588/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:20.315 [589/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:20.315 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:20.315 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:20.315 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:20.315 [593/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.574 [594/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:20.574 [595/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:20.574 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:20.832 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:20.832 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:20.832 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:20.832 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:20.832 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:20.832 [602/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:21.090 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:21.090 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:21.090 [605/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:21.090 [606/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:21.090 [607/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:21.090 [608/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:21.347 [609/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:21.347 [610/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:21.347 [611/705] Linking static target lib/librte_vhost.a 00:03:21.347 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:21.347 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:21.605 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:21.605 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:21.605 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:21.863 [617/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.863 [618/705] Linking target lib/librte_vhost.so.24.0 00:03:22.121 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:22.121 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:22.121 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:22.378 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:22.378 [623/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:22.378 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:22.378 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:22.378 [626/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:22.379 [627/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:22.636 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:22.636 [629/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:22.636 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:22.636 [631/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:22.636 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:22.636 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:22.894 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:22.894 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:22.894 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:22.894 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:22.894 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:22.894 [639/705] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:23.152 [640/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:23.152 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:23.152 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:23.152 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:23.410 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:23.410 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:23.410 [646/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:23.410 [647/705] Linking static target lib/librte_pipeline.a 00:03:23.410 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:23.410 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:23.410 [650/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:23.667 [651/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:23.667 [652/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:23.667 [653/705] Linking target app/dpdk-graph 00:03:23.667 [654/705] Linking target app/dpdk-pdump 00:03:23.925 [655/705] Linking target app/dpdk-dumpcap 00:03:23.925 [656/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:23.925 [657/705] Linking target app/dpdk-proc-info 00:03:23.925 [658/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:23.925 [659/705] Linking target app/dpdk-test-acl 00:03:24.184 [660/705] Linking target app/dpdk-test-compress-perf 00:03:24.184 [661/705] Linking target app/dpdk-test-cmdline 00:03:24.184 [662/705] Linking target app/dpdk-test-dma-perf 00:03:24.184 [663/705] Linking target app/dpdk-test-crypto-perf 00:03:24.184 [664/705] Linking target app/dpdk-test-eventdev 00:03:24.184 [665/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:24.184 [666/705] Linking target app/dpdk-test-fib 00:03:24.442 [667/705] Linking target app/dpdk-test-flow-perf 00:03:24.442 [668/705] Linking target app/dpdk-test-gpudev 00:03:24.442 [669/705] Linking target app/dpdk-test-mldev 00:03:24.442 [670/705] Linking target app/dpdk-test-pipeline 00:03:24.442 [671/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:24.700 [672/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:24.700 [673/705] Linking target app/dpdk-test-bbdev 00:03:24.700 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:24.957 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:24.957 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:24.957 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:24.957 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:25.215 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:25.215 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:25.215 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:25.215 [682/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.215 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:25.215 [684/705] Linking 
target lib/librte_pipeline.so.24.0 00:03:25.473 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:25.473 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:25.473 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:25.473 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:25.732 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:25.732 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:25.732 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:25.988 [692/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:25.989 [693/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:25.989 [694/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:26.247 [695/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:26.247 [696/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:26.247 [697/705] Linking target app/dpdk-test-sad 00:03:26.506 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:26.506 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:26.506 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:26.506 [701/705] Linking target app/dpdk-test-regex 00:03:26.506 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:26.765 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:26.765 [704/705] Linking target app/dpdk-test-security-perf 00:03:27.023 [705/705] Linking target app/dpdk-testpmd 00:03:27.023 00:35:18 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:27.023 00:35:18 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:27.023 00:35:18 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:27.023 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:27.023 [0/1] Installing files. 
00:03:27.285 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.286 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.287 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.289 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.289 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.290 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.290 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.290 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:27.551 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.551 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:27.552 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:27.552 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:27.552 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:27.552 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.552 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:27.552 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.552 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.553 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.554 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.555 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.815 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.815 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:27.815 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:27.815 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:27.815 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:27.815 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:27.815 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:27.815 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:27.815 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:27.815 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:27.815 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:27.815 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:27.815 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:27.815 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:27.815 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:27.815 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:27.815 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:27.815 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:27.815 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:27.815 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:27.815 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:27.815 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:27.815 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:27.815 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:27.815 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:27.815 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:27.815 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:27.815 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:27.815 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:27.815 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:27.815 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:27.815 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:27.815 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:27.815 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:27.815 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:27.815 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:27.816 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:27.816 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:27.816 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:27.816 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:27.816 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:27.816 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:27.816 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:27.816 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:27.816 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:27.816 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:27.816 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:27.816 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:27.816 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:27.816 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:27.816 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:27.816 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:27.816 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:27.816 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:27.816 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:27.816 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:27.816 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:27.816 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:27.816 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:27.816 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:27.816 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:27.816 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:27.816 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:27.816 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:27.816 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:27.816 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:27.816 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:27.816 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:27.816 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:27.816 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:27.816 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:27.816 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:27.816 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:27.816 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:27.816 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:27.816 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:27.816 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:27.816 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:27.816 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:27.816 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:27.816 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:27.816 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:27.816 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:27.816 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:27.816 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:27.816 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:27.816 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:27.816 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:27.816 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:27.816 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:27.816 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:27.816 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:27.816 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:27.816 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:27.816 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:27.816 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:27.816 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:27.816 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:27.816 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:27.816 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:27.816 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:27.816 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:27.816 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:27.816 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:27.816 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:27.816 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:27.816 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:27.816 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:27.816 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:27.816 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:27.816 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:27.816 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:27.816 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:27.816 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:27.816 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:27.816 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:27.816 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:27.816 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:27.816 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:27.816 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:27.816 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:27.816 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:27.816 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:27.816 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:27.816 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:27.816 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:27.816 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:27.816 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:27.816 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:27.816 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:27.816 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:27.816 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:27.816 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:27.816 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:27.816 00:35:19 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:27.816 ************************************ 00:03:27.816 END TEST build_native_dpdk 00:03:27.816 ************************************ 00:03:27.816 00:35:19 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:27.816 00:03:27.816 real 0m37.296s 00:03:27.816 user 4m14.259s 00:03:27.816 sys 0m40.428s 00:03:27.817 00:35:19 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:27.817 00:35:19 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:27.817 00:35:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:27.817 00:35:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:27.817 00:35:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:27.817 00:35:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:27.817 00:35:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:27.817 00:35:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:27.817 00:35:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:27.817 00:35:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:27.817 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:27.817 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.817 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:28.075 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:28.075 Using 'verbs' RDMA provider 00:03:41.250 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:51.249 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:51.249 Creating mk/config.mk...done. 00:03:51.249 Creating mk/cc.flags.mk...done. 00:03:51.249 Type 'make' to build. 00:03:51.249 00:35:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:51.249 00:35:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:51.249 00:35:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:51.249 00:35:43 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.249 ************************************ 00:03:51.249 START TEST make 00:03:51.249 ************************************ 00:03:51.249 00:35:43 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:51.510 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:51.510 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:51.510 meson setup builddir \ 00:03:51.510 -Dwith-libaio=enabled \ 00:03:51.510 -Dwith-liburing=enabled \ 00:03:51.510 -Dwith-libvfn=disabled \ 00:03:51.510 -Dwith-spdk=false && \ 00:03:51.510 meson compile -C builddir && \ 00:03:51.510 cd -) 00:03:51.510 make[1]: Nothing to be done for 'all'. 
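A note on the long run of "Installing symlink pointing to ..." lines above: each DPDK shared library is installed as one real, fully versioned file plus two symlinks, which is the standard ELF soname layout. A minimal sketch of the same chain, using librte_eal from this build tree as the example (the ln -s commands are illustrative of the layout meson produces, not commands the installer literally runs):

  cd /home/vagrant/spdk_repo/dpdk/build/lib
  # librte_eal.so.24.0 is the real file, named with the exact ABI version
  ln -s librte_eal.so.24.0 librte_eal.so.24   # soname link, resolved by the runtime loader
  ln -s librte_eal.so.24   librte_eal.so      # dev link, resolved by the linker via -lrte_eal

The PMDs get the same treatment, except that the real files live under dpdk/pmds-24.0/ and the top-level names are symlinked into that directory, which is what the './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' lines and the symlink-drivers-solibs.sh install script record.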
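The configure step above picks up this freshly installed DPDK through the libdpdk.pc and libdpdk-libs.pc files placed in build/lib/pkgconfig (the doubled slash in the "DPDK includes" line is a harmless artifact of an empty prefix variable and resolves to the same directory on Linux). A hedged sketch of how the same build could be queried by hand with stock pkg-config, assuming the /home/vagrant/spdk_repo layout used in this run:

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig:$PKG_CONFIG_PATH
  pkg-config --modversion libdpdk   # reports the DPDK version (the v23.11 stable branch here)
  pkg-config --cflags libdpdk       # emits -I/home/vagrant/spdk_repo/dpdk/build/include
  pkg-config --libs libdpdk         # emits -L/home/vagrant/spdk_repo/dpdk/build/lib plus the librte_* link line

This is the same mechanism --with-dpdk=/home/vagrant/spdk_repo/dpdk/build relies on, so a wrong PKG_CONFIG_PATH at this stage would surface as configure failing to find the DPDK libraries.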
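For the xnvme subproject that make recurses into above, the -D feature toggles map one-to-one onto the dependency probes in the meson output that follows: libaio and liburing are probed and linked, libvfn is skipped, and the SPDK backend is turned off, presumably to avoid a circular dependency while SPDK itself is still being built. A small sketch of how those options could be inspected or flipped after setup, assuming the same builddir (meson configure is standard meson, not an xnvme-specific tool):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson configure builddir                         # list all project options and their current values
  meson configure builddir -Dwith-libvfn=enabled   # example: re-enable the backend skipped in this run
  meson compile -C builddir                        # rebuild with the changed configuration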
00:03:54.057 The Meson build system 00:03:54.057 Version: 1.5.0 00:03:54.057 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:54.057 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:54.057 Build type: native build 00:03:54.057 Project name: xnvme 00:03:54.057 Project version: 0.7.3 00:03:54.057 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:54.057 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:54.057 Host machine cpu family: x86_64 00:03:54.057 Host machine cpu: x86_64 00:03:54.057 Message: host_machine.system: linux 00:03:54.057 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:54.057 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:54.057 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:54.057 Run-time dependency threads found: YES 00:03:54.057 Has header "setupapi.h" : NO 00:03:54.057 Has header "linux/blkzoned.h" : YES 00:03:54.057 Has header "linux/blkzoned.h" : YES (cached) 00:03:54.057 Has header "libaio.h" : YES 00:03:54.057 Library aio found: YES 00:03:54.057 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:54.057 Run-time dependency liburing found: YES 2.2 00:03:54.057 Dependency libvfn skipped: feature with-libvfn disabled 00:03:54.057 Run-time dependency appleframeworks found: NO (tried framework) 00:03:54.057 Run-time dependency appleframeworks found: NO (tried framework) 00:03:54.057 Configuring xnvme_config.h using configuration 00:03:54.057 Configuring xnvme.spec using configuration 00:03:54.057 Run-time dependency bash-completion found: YES 2.11 00:03:54.057 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:54.057 Program cp found: YES (/usr/bin/cp) 00:03:54.057 Has header "winsock2.h" : NO 00:03:54.057 Has header "dbghelp.h" : NO 00:03:54.057 Library rpcrt4 found: NO 00:03:54.057 Library rt found: YES 00:03:54.057 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:54.057 Found CMake: /usr/bin/cmake (3.27.7) 00:03:54.057 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:54.057 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:54.057 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:54.057 Build targets in project: 32 00:03:54.057 00:03:54.057 xnvme 0.7.3 00:03:54.057 00:03:54.057 User defined options 00:03:54.057 with-libaio : enabled 00:03:54.057 with-liburing: enabled 00:03:54.057 with-libvfn : disabled 00:03:54.057 with-spdk : false 00:03:54.057 00:03:54.057 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:54.317 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:54.317 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:54.317 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:54.317 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:54.317 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:54.317 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:54.317 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:54.317 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:54.317 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:54.317 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:54.317 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:54.317 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:54.317 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:54.578 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:54.578 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:54.578 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:54.578 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:54.578 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:54.578 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:54.578 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:54.578 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:54.578 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:54.578 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:54.578 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:54.578 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:54.578 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:54.578 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:54.578 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:54.579 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:54.579 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:54.579 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:54.579 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:54.579 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:54.579 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:54.579 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:54.579 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:54.579 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:54.579 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:54.579 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:54.579 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:54.579 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:54.579 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:54.579 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:54.579 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:54.579 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:54.579 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:54.579 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:54.579 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:54.841 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:54.841 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:54.841 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:54.841 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:54.841 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:54.841 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:54.841 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:54.841 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:54.841 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:54.841 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:54.841 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:54.841 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:54.841 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:54.841 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:54.841 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:54.841 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:54.841 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:54.841 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:54.841 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:54.841 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:54.841 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:54.841 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:54.841 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:55.102 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:55.102 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:55.102 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:55.102 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:55.102 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:55.102 [76/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:55.102 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:55.102 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:55.102 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:55.102 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:55.102 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:55.102 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:55.102 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:55.102 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:55.102 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:55.102 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:55.102 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:55.362 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:55.362 [89/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:55.362 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:55.362 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:55.362 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:55.362 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:55.362 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:55.362 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:55.362 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:55.362 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:55.362 [98/203] 
Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:55.362 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:55.362 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:55.362 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:55.362 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:55.362 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:55.362 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:55.362 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:55.362 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:55.362 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:55.362 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:55.362 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:55.362 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:55.362 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:55.362 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:55.362 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:55.362 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:55.362 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:55.362 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:55.362 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:55.362 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:55.362 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:55.362 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:55.362 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:55.362 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:55.362 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:55.662 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:55.662 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:55.662 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:55.662 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:55.662 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:55.662 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:55.662 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:55.662 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:55.662 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:55.662 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:55.662 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:55.662 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:55.662 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:55.662 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:55.662 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:55.662 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:55.662 [140/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:55.662 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:55.662 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:55.662 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:55.662 [144/203] Linking target lib/libxnvme.so 00:03:55.943 [145/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:55.943 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:55.943 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:55.943 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:55.943 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:55.943 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:55.943 [151/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:55.943 [152/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:55.943 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:55.943 [154/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:55.943 [155/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:55.943 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:55.943 [157/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:55.943 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:55.943 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:55.943 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:55.943 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:55.943 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:55.943 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:55.943 [164/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:55.943 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:55.943 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:55.943 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:56.202 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:56.202 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:56.202 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:56.202 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:56.202 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:56.202 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:56.460 [174/203] Linking static target lib/libxnvme.a 00:03:56.460 [175/203] Linking target tests/xnvme_tests_async_intf 00:03:56.460 [176/203] Linking target tests/xnvme_tests_enum 00:03:56.460 [177/203] Linking target tests/xnvme_tests_ioworker 00:03:56.460 [178/203] Linking target tests/xnvme_tests_lblk 00:03:56.460 [179/203] Linking target tests/xnvme_tests_scc 00:03:56.460 [180/203] Linking target tests/xnvme_tests_buf 00:03:56.460 [181/203] Linking target tests/xnvme_tests_cli 00:03:56.460 [182/203] Linking target tests/xnvme_tests_znd_append 00:03:56.460 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:03:56.460 [184/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:56.460 [185/203] Linking target tests/xnvme_tests_znd_state 00:03:56.460 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:56.460 [187/203] Linking target tests/xnvme_tests_kvs 00:03:56.460 [188/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:56.460 [189/203] Linking target tests/xnvme_tests_map 00:03:56.460 [190/203] Linking target tools/xnvme_file 00:03:56.460 [191/203] 
Linking target tools/xnvme 00:03:56.460 [192/203] Linking target tools/zoned 00:03:56.460 [193/203] Linking target tools/kvs 00:03:56.460 [194/203] Linking target examples/xnvme_io_async 00:03:56.460 [195/203] Linking target tools/lblk 00:03:56.460 [196/203] Linking target tools/xdd 00:03:56.460 [197/203] Linking target examples/xnvme_enum 00:03:56.460 [198/203] Linking target examples/xnvme_dev 00:03:56.460 [199/203] Linking target examples/xnvme_hello 00:03:56.460 [200/203] Linking target examples/xnvme_single_sync 00:03:56.460 [201/203] Linking target examples/xnvme_single_async 00:03:56.460 [202/203] Linking target examples/zoned_io_sync 00:03:56.460 [203/203] Linking target examples/zoned_io_async 00:03:56.461 INFO: autodetecting backend as ninja 00:03:56.461 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:56.461 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:28.531 CC lib/log/log.o 00:04:28.531 CC lib/log/log_flags.o 00:04:28.531 CC lib/log/log_deprecated.o 00:04:28.531 CC lib/ut_mock/mock.o 00:04:28.531 CC lib/ut/ut.o 00:04:28.531 LIB libspdk_ut_mock.a 00:04:28.531 LIB libspdk_log.a 00:04:28.531 SO libspdk_ut_mock.so.6.0 00:04:28.531 LIB libspdk_ut.a 00:04:28.531 SO libspdk_log.so.7.0 00:04:28.531 SO libspdk_ut.so.2.0 00:04:28.531 SYMLINK libspdk_ut_mock.so 00:04:28.531 SYMLINK libspdk_log.so 00:04:28.531 SYMLINK libspdk_ut.so 00:04:28.531 CXX lib/trace_parser/trace.o 00:04:28.531 CC lib/ioat/ioat.o 00:04:28.531 CC lib/dma/dma.o 00:04:28.531 CC lib/util/bit_array.o 00:04:28.531 CC lib/util/base64.o 00:04:28.531 CC lib/util/cpuset.o 00:04:28.531 CC lib/util/crc16.o 00:04:28.531 CC lib/util/crc32.o 00:04:28.531 CC lib/util/crc32c.o 00:04:28.531 CC lib/vfio_user/host/vfio_user_pci.o 00:04:28.531 CC lib/util/crc32_ieee.o 00:04:28.531 CC lib/util/crc64.o 00:04:28.531 CC lib/util/dif.o 00:04:28.531 CC lib/util/fd.o 00:04:28.531 LIB libspdk_dma.a 00:04:28.531 SO libspdk_dma.so.5.0 00:04:28.531 LIB libspdk_ioat.a 00:04:28.531 CC lib/vfio_user/host/vfio_user.o 00:04:28.531 CC lib/util/fd_group.o 00:04:28.531 SO libspdk_ioat.so.7.0 00:04:28.531 CC lib/util/file.o 00:04:28.531 SYMLINK libspdk_dma.so 00:04:28.531 CC lib/util/hexlify.o 00:04:28.531 CC lib/util/iov.o 00:04:28.531 SYMLINK libspdk_ioat.so 00:04:28.531 CC lib/util/math.o 00:04:28.531 CC lib/util/net.o 00:04:28.531 CC lib/util/pipe.o 00:04:28.531 CC lib/util/strerror_tls.o 00:04:28.531 CC lib/util/string.o 00:04:28.531 CC lib/util/uuid.o 00:04:28.531 LIB libspdk_vfio_user.a 00:04:28.531 CC lib/util/xor.o 00:04:28.531 CC lib/util/zipf.o 00:04:28.531 CC lib/util/md5.o 00:04:28.531 SO libspdk_vfio_user.so.5.0 00:04:28.531 SYMLINK libspdk_vfio_user.so 00:04:28.531 LIB libspdk_util.a 00:04:28.531 SO libspdk_util.so.10.0 00:04:28.531 LIB libspdk_trace_parser.a 00:04:28.531 SYMLINK libspdk_util.so 00:04:28.531 SO libspdk_trace_parser.so.6.0 00:04:28.531 SYMLINK libspdk_trace_parser.so 00:04:28.531 CC lib/idxd/idxd.o 00:04:28.531 CC lib/idxd/idxd_user.o 00:04:28.531 CC lib/rdma_utils/rdma_utils.o 00:04:28.531 CC lib/idxd/idxd_kernel.o 00:04:28.531 CC lib/rdma_provider/common.o 00:04:28.531 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:28.531 CC lib/conf/conf.o 00:04:28.531 CC lib/env_dpdk/env.o 00:04:28.531 CC lib/json/json_parse.o 00:04:28.531 CC lib/vmd/vmd.o 00:04:28.531 CC lib/env_dpdk/memory.o 00:04:28.531 CC lib/env_dpdk/pci.o 00:04:28.531 LIB libspdk_rdma_provider.a 00:04:28.531 SO libspdk_rdma_provider.so.6.0 00:04:28.531 LIB libspdk_rdma_utils.a 00:04:28.531 CC 
lib/json/json_util.o 00:04:28.531 CC lib/json/json_write.o 00:04:28.531 SO libspdk_rdma_utils.so.1.0 00:04:28.531 LIB libspdk_conf.a 00:04:28.531 SYMLINK libspdk_rdma_provider.so 00:04:28.531 CC lib/vmd/led.o 00:04:28.531 SYMLINK libspdk_rdma_utils.so 00:04:28.531 CC lib/env_dpdk/init.o 00:04:28.531 SO libspdk_conf.so.6.0 00:04:28.531 SYMLINK libspdk_conf.so 00:04:28.531 CC lib/env_dpdk/threads.o 00:04:28.531 CC lib/env_dpdk/pci_ioat.o 00:04:28.531 CC lib/env_dpdk/pci_virtio.o 00:04:28.531 CC lib/env_dpdk/pci_vmd.o 00:04:28.531 CC lib/env_dpdk/pci_idxd.o 00:04:28.531 LIB libspdk_json.a 00:04:28.531 CC lib/env_dpdk/pci_event.o 00:04:28.531 SO libspdk_json.so.6.0 00:04:28.531 CC lib/env_dpdk/sigbus_handler.o 00:04:28.531 CC lib/env_dpdk/pci_dpdk.o 00:04:28.531 SYMLINK libspdk_json.so 00:04:28.531 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:28.532 LIB libspdk_idxd.a 00:04:28.532 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:28.532 SO libspdk_idxd.so.12.1 00:04:28.532 LIB libspdk_vmd.a 00:04:28.532 SYMLINK libspdk_idxd.so 00:04:28.532 SO libspdk_vmd.so.6.0 00:04:28.532 SYMLINK libspdk_vmd.so 00:04:28.532 CC lib/jsonrpc/jsonrpc_server.o 00:04:28.532 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:28.532 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:28.532 CC lib/jsonrpc/jsonrpc_client.o 00:04:28.532 LIB libspdk_jsonrpc.a 00:04:28.532 SO libspdk_jsonrpc.so.6.0 00:04:28.532 SYMLINK libspdk_jsonrpc.so 00:04:28.532 CC lib/rpc/rpc.o 00:04:28.532 LIB libspdk_env_dpdk.a 00:04:28.532 SO libspdk_env_dpdk.so.15.0 00:04:28.532 LIB libspdk_rpc.a 00:04:28.532 SO libspdk_rpc.so.6.0 00:04:28.532 SYMLINK libspdk_env_dpdk.so 00:04:28.532 SYMLINK libspdk_rpc.so 00:04:28.532 CC lib/notify/notify_rpc.o 00:04:28.532 CC lib/notify/notify.o 00:04:28.532 CC lib/trace/trace.o 00:04:28.532 CC lib/trace/trace_flags.o 00:04:28.532 CC lib/trace/trace_rpc.o 00:04:28.532 CC lib/keyring/keyring.o 00:04:28.532 CC lib/keyring/keyring_rpc.o 00:04:28.532 LIB libspdk_notify.a 00:04:28.532 SO libspdk_notify.so.6.0 00:04:28.532 SYMLINK libspdk_notify.so 00:04:28.532 LIB libspdk_keyring.a 00:04:28.532 LIB libspdk_trace.a 00:04:28.532 SO libspdk_keyring.so.2.0 00:04:28.532 SO libspdk_trace.so.11.0 00:04:28.532 SYMLINK libspdk_keyring.so 00:04:28.532 SYMLINK libspdk_trace.so 00:04:28.532 CC lib/sock/sock.o 00:04:28.532 CC lib/sock/sock_rpc.o 00:04:28.532 CC lib/thread/thread.o 00:04:28.532 CC lib/thread/iobuf.o 00:04:29.104 LIB libspdk_sock.a 00:04:29.104 SO libspdk_sock.so.10.0 00:04:29.104 SYMLINK libspdk_sock.so 00:04:29.367 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:29.367 CC lib/nvme/nvme_ctrlr.o 00:04:29.367 CC lib/nvme/nvme_ns_cmd.o 00:04:29.367 CC lib/nvme/nvme_fabric.o 00:04:29.367 CC lib/nvme/nvme_ns.o 00:04:29.367 CC lib/nvme/nvme_pcie_common.o 00:04:29.367 CC lib/nvme/nvme_pcie.o 00:04:29.367 CC lib/nvme/nvme.o 00:04:29.367 CC lib/nvme/nvme_qpair.o 00:04:30.312 CC lib/nvme/nvme_quirks.o 00:04:30.312 CC lib/nvme/nvme_transport.o 00:04:30.312 CC lib/nvme/nvme_discovery.o 00:04:30.312 LIB libspdk_thread.a 00:04:30.312 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:30.312 SO libspdk_thread.so.10.1 00:04:30.312 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:30.312 SYMLINK libspdk_thread.so 00:04:30.312 CC lib/nvme/nvme_tcp.o 00:04:30.312 CC lib/nvme/nvme_opal.o 00:04:30.312 CC lib/nvme/nvme_io_msg.o 00:04:30.577 CC lib/nvme/nvme_poll_group.o 00:04:30.577 CC lib/nvme/nvme_zns.o 00:04:30.852 CC lib/nvme/nvme_stubs.o 00:04:30.852 CC lib/nvme/nvme_auth.o 00:04:30.852 CC lib/nvme/nvme_cuse.o 00:04:30.852 CC lib/nvme/nvme_rdma.o 00:04:31.111 CC lib/accel/accel.o 00:04:31.111 CC 
lib/blob/blobstore.o 00:04:31.111 CC lib/blob/request.o 00:04:31.111 CC lib/blob/zeroes.o 00:04:31.369 CC lib/accel/accel_rpc.o 00:04:31.369 CC lib/blob/blob_bs_dev.o 00:04:31.369 CC lib/accel/accel_sw.o 00:04:31.628 CC lib/init/json_config.o 00:04:31.628 CC lib/init/subsystem.o 00:04:31.628 CC lib/init/subsystem_rpc.o 00:04:31.628 CC lib/virtio/virtio.o 00:04:31.885 CC lib/init/rpc.o 00:04:31.885 CC lib/virtio/virtio_vhost_user.o 00:04:31.885 CC lib/virtio/virtio_vfio_user.o 00:04:31.885 CC lib/virtio/virtio_pci.o 00:04:31.885 LIB libspdk_init.a 00:04:31.885 CC lib/fsdev/fsdev.o 00:04:31.885 CC lib/fsdev/fsdev_io.o 00:04:31.885 SO libspdk_init.so.6.0 00:04:31.885 LIB libspdk_accel.a 00:04:32.143 SYMLINK libspdk_init.so 00:04:32.143 CC lib/fsdev/fsdev_rpc.o 00:04:32.143 SO libspdk_accel.so.16.0 00:04:32.143 SYMLINK libspdk_accel.so 00:04:32.143 LIB libspdk_virtio.a 00:04:32.143 SO libspdk_virtio.so.7.0 00:04:32.143 CC lib/event/app.o 00:04:32.143 CC lib/event/reactor.o 00:04:32.143 CC lib/event/log_rpc.o 00:04:32.143 CC lib/event/app_rpc.o 00:04:32.143 SYMLINK libspdk_virtio.so 00:04:32.143 CC lib/event/scheduler_static.o 00:04:32.143 LIB libspdk_nvme.a 00:04:32.143 CC lib/bdev/bdev.o 00:04:32.402 CC lib/bdev/bdev_rpc.o 00:04:32.402 CC lib/bdev/bdev_zone.o 00:04:32.402 CC lib/bdev/part.o 00:04:32.402 SO libspdk_nvme.so.14.0 00:04:32.402 CC lib/bdev/scsi_nvme.o 00:04:32.660 LIB libspdk_fsdev.a 00:04:32.660 SO libspdk_fsdev.so.1.0 00:04:32.660 LIB libspdk_event.a 00:04:32.660 SYMLINK libspdk_nvme.so 00:04:32.660 SYMLINK libspdk_fsdev.so 00:04:32.660 SO libspdk_event.so.14.0 00:04:32.660 SYMLINK libspdk_event.so 00:04:32.919 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:33.485 LIB libspdk_fuse_dispatcher.a 00:04:33.485 SO libspdk_fuse_dispatcher.so.1.0 00:04:33.743 SYMLINK libspdk_fuse_dispatcher.so 00:04:33.743 LIB libspdk_blob.a 00:04:34.001 SO libspdk_blob.so.11.0 00:04:34.001 SYMLINK libspdk_blob.so 00:04:34.259 CC lib/lvol/lvol.o 00:04:34.259 CC lib/blobfs/blobfs.o 00:04:34.259 CC lib/blobfs/tree.o 00:04:34.829 LIB libspdk_bdev.a 00:04:34.829 SO libspdk_bdev.so.16.0 00:04:34.829 SYMLINK libspdk_bdev.so 00:04:35.087 CC lib/ublk/ublk.o 00:04:35.087 CC lib/scsi/dev.o 00:04:35.087 CC lib/ublk/ublk_rpc.o 00:04:35.087 CC lib/scsi/lun.o 00:04:35.087 CC lib/nvmf/ctrlr.o 00:04:35.087 CC lib/nvmf/ctrlr_discovery.o 00:04:35.087 CC lib/ftl/ftl_core.o 00:04:35.087 CC lib/nbd/nbd.o 00:04:35.087 LIB libspdk_blobfs.a 00:04:35.087 SO libspdk_blobfs.so.10.0 00:04:35.087 LIB libspdk_lvol.a 00:04:35.087 SYMLINK libspdk_blobfs.so 00:04:35.087 CC lib/nbd/nbd_rpc.o 00:04:35.087 CC lib/nvmf/ctrlr_bdev.o 00:04:35.087 SO libspdk_lvol.so.10.0 00:04:35.346 CC lib/nvmf/subsystem.o 00:04:35.346 SYMLINK libspdk_lvol.so 00:04:35.346 CC lib/nvmf/nvmf.o 00:04:35.346 CC lib/nvmf/nvmf_rpc.o 00:04:35.346 CC lib/scsi/port.o 00:04:35.346 CC lib/nvmf/transport.o 00:04:35.346 CC lib/ftl/ftl_init.o 00:04:35.346 LIB libspdk_nbd.a 00:04:35.605 SO libspdk_nbd.so.7.0 00:04:35.605 CC lib/scsi/scsi.o 00:04:35.605 SYMLINK libspdk_nbd.so 00:04:35.605 CC lib/ftl/ftl_layout.o 00:04:35.605 CC lib/nvmf/tcp.o 00:04:35.605 CC lib/scsi/scsi_bdev.o 00:04:35.605 LIB libspdk_ublk.a 00:04:35.605 SO libspdk_ublk.so.3.0 00:04:35.863 SYMLINK libspdk_ublk.so 00:04:35.863 CC lib/scsi/scsi_pr.o 00:04:35.863 CC lib/ftl/ftl_debug.o 00:04:35.863 CC lib/nvmf/stubs.o 00:04:35.863 CC lib/nvmf/mdns_server.o 00:04:35.863 CC lib/nvmf/rdma.o 00:04:36.121 CC lib/ftl/ftl_io.o 00:04:36.121 CC lib/scsi/scsi_rpc.o 00:04:36.121 CC lib/nvmf/auth.o 00:04:36.121 CC 
lib/ftl/ftl_sb.o 00:04:36.121 CC lib/ftl/ftl_l2p.o 00:04:36.121 CC lib/scsi/task.o 00:04:36.379 CC lib/ftl/ftl_l2p_flat.o 00:04:36.379 CC lib/ftl/ftl_nv_cache.o 00:04:36.379 CC lib/ftl/ftl_band.o 00:04:36.379 CC lib/ftl/ftl_band_ops.o 00:04:36.379 CC lib/ftl/ftl_writer.o 00:04:36.379 LIB libspdk_scsi.a 00:04:36.379 CC lib/ftl/ftl_rq.o 00:04:36.379 SO libspdk_scsi.so.9.0 00:04:36.379 CC lib/ftl/ftl_reloc.o 00:04:36.637 SYMLINK libspdk_scsi.so 00:04:36.637 CC lib/ftl/ftl_l2p_cache.o 00:04:36.637 CC lib/ftl/ftl_p2l.o 00:04:36.637 CC lib/ftl/ftl_p2l_log.o 00:04:36.637 CC lib/iscsi/conn.o 00:04:36.637 CC lib/iscsi/init_grp.o 00:04:36.895 CC lib/ftl/mngt/ftl_mngt.o 00:04:36.895 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:36.895 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:36.895 CC lib/iscsi/iscsi.o 00:04:36.895 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:36.895 CC lib/iscsi/param.o 00:04:37.153 CC lib/iscsi/portal_grp.o 00:04:37.153 CC lib/iscsi/tgt_node.o 00:04:37.153 CC lib/iscsi/iscsi_subsystem.o 00:04:37.153 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:37.153 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:37.153 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:37.153 CC lib/vhost/vhost.o 00:04:37.411 CC lib/vhost/vhost_rpc.o 00:04:37.411 CC lib/vhost/vhost_scsi.o 00:04:37.411 CC lib/vhost/vhost_blk.o 00:04:37.411 CC lib/iscsi/iscsi_rpc.o 00:04:37.411 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:37.411 CC lib/vhost/rte_vhost_user.o 00:04:37.411 CC lib/iscsi/task.o 00:04:37.668 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:37.668 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:37.668 LIB libspdk_nvmf.a 00:04:37.668 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:37.668 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:37.668 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:37.668 SO libspdk_nvmf.so.19.0 00:04:37.927 CC lib/ftl/utils/ftl_conf.o 00:04:37.927 CC lib/ftl/utils/ftl_md.o 00:04:37.927 CC lib/ftl/utils/ftl_mempool.o 00:04:37.927 CC lib/ftl/utils/ftl_bitmap.o 00:04:37.927 SYMLINK libspdk_nvmf.so 00:04:37.927 CC lib/ftl/utils/ftl_property.o 00:04:37.927 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:37.927 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:37.927 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:37.927 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:37.927 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:37.927 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:38.185 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:38.185 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:38.185 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:38.185 LIB libspdk_vhost.a 00:04:38.185 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:38.185 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:38.185 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:38.185 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:38.185 SO libspdk_vhost.so.8.0 00:04:38.185 CC lib/ftl/base/ftl_base_dev.o 00:04:38.185 SYMLINK libspdk_vhost.so 00:04:38.185 CC lib/ftl/base/ftl_base_bdev.o 00:04:38.185 CC lib/ftl/ftl_trace.o 00:04:38.444 LIB libspdk_iscsi.a 00:04:38.444 SO libspdk_iscsi.so.8.0 00:04:38.444 LIB libspdk_ftl.a 00:04:38.702 SYMLINK libspdk_iscsi.so 00:04:38.702 SO libspdk_ftl.so.9.0 00:04:38.960 SYMLINK libspdk_ftl.so 00:04:39.219 CC module/env_dpdk/env_dpdk_rpc.o 00:04:39.219 CC module/fsdev/aio/fsdev_aio.o 00:04:39.219 CC module/keyring/file/keyring.o 00:04:39.219 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:39.219 CC module/scheduler/gscheduler/gscheduler.o 00:04:39.219 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:39.219 CC module/accel/error/accel_error.o 00:04:39.219 CC module/blob/bdev/blob_bdev.o 00:04:39.219 CC module/sock/posix/posix.o 00:04:39.219 CC 
module/keyring/linux/keyring.o 00:04:39.219 LIB libspdk_env_dpdk_rpc.a 00:04:39.219 SO libspdk_env_dpdk_rpc.so.6.0 00:04:39.219 SYMLINK libspdk_env_dpdk_rpc.so 00:04:39.219 CC module/keyring/linux/keyring_rpc.o 00:04:39.219 LIB libspdk_scheduler_dpdk_governor.a 00:04:39.219 CC module/keyring/file/keyring_rpc.o 00:04:39.219 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:39.219 LIB libspdk_scheduler_gscheduler.a 00:04:39.219 LIB libspdk_scheduler_dynamic.a 00:04:39.219 CC module/accel/error/accel_error_rpc.o 00:04:39.219 SO libspdk_scheduler_gscheduler.so.4.0 00:04:39.219 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:39.219 SO libspdk_scheduler_dynamic.so.4.0 00:04:39.219 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:39.219 CC module/fsdev/aio/linux_aio_mgr.o 00:04:39.477 SYMLINK libspdk_scheduler_gscheduler.so 00:04:39.477 SYMLINK libspdk_scheduler_dynamic.so 00:04:39.477 LIB libspdk_keyring_linux.a 00:04:39.477 LIB libspdk_blob_bdev.a 00:04:39.477 SO libspdk_keyring_linux.so.1.0 00:04:39.477 SO libspdk_blob_bdev.so.11.0 00:04:39.477 LIB libspdk_keyring_file.a 00:04:39.477 SO libspdk_keyring_file.so.2.0 00:04:39.477 LIB libspdk_accel_error.a 00:04:39.477 SYMLINK libspdk_keyring_linux.so 00:04:39.477 SYMLINK libspdk_blob_bdev.so 00:04:39.477 SO libspdk_accel_error.so.2.0 00:04:39.477 SYMLINK libspdk_keyring_file.so 00:04:39.477 CC module/accel/ioat/accel_ioat.o 00:04:39.477 CC module/accel/ioat/accel_ioat_rpc.o 00:04:39.477 SYMLINK libspdk_accel_error.so 00:04:39.477 CC module/accel/dsa/accel_dsa.o 00:04:39.736 CC module/accel/iaa/accel_iaa.o 00:04:39.736 CC module/bdev/error/vbdev_error.o 00:04:39.736 CC module/bdev/delay/vbdev_delay.o 00:04:39.736 CC module/bdev/gpt/gpt.o 00:04:39.736 CC module/blobfs/bdev/blobfs_bdev.o 00:04:39.736 LIB libspdk_accel_ioat.a 00:04:39.736 SO libspdk_accel_ioat.so.6.0 00:04:39.736 CC module/accel/dsa/accel_dsa_rpc.o 00:04:39.736 CC module/bdev/lvol/vbdev_lvol.o 00:04:39.736 SYMLINK libspdk_accel_ioat.so 00:04:39.736 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:39.736 CC module/accel/iaa/accel_iaa_rpc.o 00:04:39.736 LIB libspdk_fsdev_aio.a 00:04:39.736 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:39.736 SO libspdk_fsdev_aio.so.1.0 00:04:39.994 CC module/bdev/gpt/vbdev_gpt.o 00:04:39.994 LIB libspdk_accel_dsa.a 00:04:39.994 LIB libspdk_sock_posix.a 00:04:39.994 SO libspdk_accel_dsa.so.5.0 00:04:39.994 SO libspdk_sock_posix.so.6.0 00:04:39.994 SYMLINK libspdk_fsdev_aio.so 00:04:39.994 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:39.994 CC module/bdev/error/vbdev_error_rpc.o 00:04:39.994 LIB libspdk_accel_iaa.a 00:04:39.994 SO libspdk_accel_iaa.so.3.0 00:04:39.994 SYMLINK libspdk_sock_posix.so 00:04:39.994 SYMLINK libspdk_accel_dsa.so 00:04:39.994 LIB libspdk_blobfs_bdev.a 00:04:39.994 SYMLINK libspdk_accel_iaa.so 00:04:39.994 SO libspdk_blobfs_bdev.so.6.0 00:04:39.994 SYMLINK libspdk_blobfs_bdev.so 00:04:39.994 LIB libspdk_bdev_delay.a 00:04:39.994 LIB libspdk_bdev_error.a 00:04:39.994 SO libspdk_bdev_delay.so.6.0 00:04:39.994 CC module/bdev/malloc/bdev_malloc.o 00:04:39.994 CC module/bdev/null/bdev_null.o 00:04:40.253 LIB libspdk_bdev_gpt.a 00:04:40.253 SO libspdk_bdev_error.so.6.0 00:04:40.253 SO libspdk_bdev_gpt.so.6.0 00:04:40.253 CC module/bdev/passthru/vbdev_passthru.o 00:04:40.253 SYMLINK libspdk_bdev_delay.so 00:04:40.253 CC module/bdev/nvme/bdev_nvme.o 00:04:40.253 SYMLINK libspdk_bdev_error.so 00:04:40.253 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:40.253 LIB libspdk_bdev_lvol.a 00:04:40.253 CC module/bdev/split/vbdev_split.o 00:04:40.253 SYMLINK 
libspdk_bdev_gpt.so 00:04:40.253 CC module/bdev/raid/bdev_raid.o 00:04:40.253 CC module/bdev/split/vbdev_split_rpc.o 00:04:40.253 SO libspdk_bdev_lvol.so.6.0 00:04:40.253 SYMLINK libspdk_bdev_lvol.so 00:04:40.253 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:40.253 CC module/bdev/null/bdev_null_rpc.o 00:04:40.512 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:40.512 CC module/bdev/xnvme/bdev_xnvme.o 00:04:40.512 LIB libspdk_bdev_split.a 00:04:40.512 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:40.512 SO libspdk_bdev_split.so.6.0 00:04:40.512 LIB libspdk_bdev_null.a 00:04:40.512 CC module/bdev/aio/bdev_aio.o 00:04:40.512 SO libspdk_bdev_null.so.6.0 00:04:40.512 SYMLINK libspdk_bdev_split.so 00:04:40.512 CC module/bdev/aio/bdev_aio_rpc.o 00:04:40.512 SYMLINK libspdk_bdev_null.so 00:04:40.512 CC module/bdev/raid/bdev_raid_rpc.o 00:04:40.512 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:40.512 LIB libspdk_bdev_malloc.a 00:04:40.512 LIB libspdk_bdev_passthru.a 00:04:40.512 SO libspdk_bdev_malloc.so.6.0 00:04:40.512 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:40.512 SO libspdk_bdev_passthru.so.6.0 00:04:40.770 SYMLINK libspdk_bdev_malloc.so 00:04:40.770 CC module/bdev/raid/bdev_raid_sb.o 00:04:40.770 CC module/bdev/raid/raid0.o 00:04:40.770 SYMLINK libspdk_bdev_passthru.so 00:04:40.770 CC module/bdev/nvme/nvme_rpc.o 00:04:40.770 CC module/bdev/nvme/bdev_mdns_client.o 00:04:40.770 LIB libspdk_bdev_xnvme.a 00:04:40.770 LIB libspdk_bdev_zone_block.a 00:04:40.770 LIB libspdk_bdev_aio.a 00:04:40.770 SO libspdk_bdev_aio.so.6.0 00:04:40.770 SO libspdk_bdev_xnvme.so.3.0 00:04:40.770 SO libspdk_bdev_zone_block.so.6.0 00:04:40.770 CC module/bdev/nvme/vbdev_opal.o 00:04:40.770 SYMLINK libspdk_bdev_aio.so 00:04:40.770 SYMLINK libspdk_bdev_xnvme.so 00:04:40.770 SYMLINK libspdk_bdev_zone_block.so 00:04:40.770 CC module/bdev/raid/raid1.o 00:04:41.029 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:41.029 CC module/bdev/raid/concat.o 00:04:41.029 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:41.029 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:41.029 CC module/bdev/ftl/bdev_ftl.o 00:04:41.029 CC module/bdev/iscsi/bdev_iscsi.o 00:04:41.029 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:41.029 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:41.029 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:41.029 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:41.029 LIB libspdk_bdev_raid.a 00:04:41.288 SO libspdk_bdev_raid.so.6.0 00:04:41.288 SYMLINK libspdk_bdev_raid.so 00:04:41.288 LIB libspdk_bdev_ftl.a 00:04:41.288 SO libspdk_bdev_ftl.so.6.0 00:04:41.288 LIB libspdk_bdev_iscsi.a 00:04:41.288 SO libspdk_bdev_iscsi.so.6.0 00:04:41.288 SYMLINK libspdk_bdev_ftl.so 00:04:41.288 SYMLINK libspdk_bdev_iscsi.so 00:04:41.288 LIB libspdk_bdev_virtio.a 00:04:41.288 SO libspdk_bdev_virtio.so.6.0 00:04:41.547 SYMLINK libspdk_bdev_virtio.so 00:04:42.483 LIB libspdk_bdev_nvme.a 00:04:42.483 SO libspdk_bdev_nvme.so.7.0 00:04:42.745 SYMLINK libspdk_bdev_nvme.so 00:04:43.005 CC module/event/subsystems/iobuf/iobuf.o 00:04:43.005 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:43.005 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:43.005 CC module/event/subsystems/scheduler/scheduler.o 00:04:43.005 CC module/event/subsystems/vmd/vmd.o 00:04:43.005 CC module/event/subsystems/keyring/keyring.o 00:04:43.005 CC module/event/subsystems/sock/sock.o 00:04:43.005 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:43.005 CC module/event/subsystems/fsdev/fsdev.o 00:04:43.263 LIB libspdk_event_keyring.a 00:04:43.263 LIB 
libspdk_event_vhost_blk.a 00:04:43.263 LIB libspdk_event_vmd.a 00:04:43.263 LIB libspdk_event_fsdev.a 00:04:43.263 LIB libspdk_event_sock.a 00:04:43.263 LIB libspdk_event_iobuf.a 00:04:43.263 LIB libspdk_event_scheduler.a 00:04:43.263 SO libspdk_event_vhost_blk.so.3.0 00:04:43.263 SO libspdk_event_keyring.so.1.0 00:04:43.263 SO libspdk_event_vmd.so.6.0 00:04:43.263 SO libspdk_event_fsdev.so.1.0 00:04:43.263 SO libspdk_event_sock.so.5.0 00:04:43.263 SO libspdk_event_scheduler.so.4.0 00:04:43.264 SO libspdk_event_iobuf.so.3.0 00:04:43.264 SYMLINK libspdk_event_keyring.so 00:04:43.264 SYMLINK libspdk_event_vhost_blk.so 00:04:43.264 SYMLINK libspdk_event_vmd.so 00:04:43.264 SYMLINK libspdk_event_scheduler.so 00:04:43.264 SYMLINK libspdk_event_fsdev.so 00:04:43.264 SYMLINK libspdk_event_sock.so 00:04:43.264 SYMLINK libspdk_event_iobuf.so 00:04:43.522 CC module/event/subsystems/accel/accel.o 00:04:43.522 LIB libspdk_event_accel.a 00:04:43.522 SO libspdk_event_accel.so.6.0 00:04:43.522 SYMLINK libspdk_event_accel.so 00:04:43.780 CC module/event/subsystems/bdev/bdev.o 00:04:44.038 LIB libspdk_event_bdev.a 00:04:44.038 SO libspdk_event_bdev.so.6.0 00:04:44.038 SYMLINK libspdk_event_bdev.so 00:04:44.296 CC module/event/subsystems/nbd/nbd.o 00:04:44.296 CC module/event/subsystems/scsi/scsi.o 00:04:44.296 CC module/event/subsystems/ublk/ublk.o 00:04:44.296 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:44.296 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:44.296 LIB libspdk_event_ublk.a 00:04:44.296 LIB libspdk_event_nbd.a 00:04:44.296 SO libspdk_event_ublk.so.3.0 00:04:44.296 SO libspdk_event_nbd.so.6.0 00:04:44.296 LIB libspdk_event_scsi.a 00:04:44.296 SO libspdk_event_scsi.so.6.0 00:04:44.296 SYMLINK libspdk_event_ublk.so 00:04:44.296 SYMLINK libspdk_event_nbd.so 00:04:44.553 SYMLINK libspdk_event_scsi.so 00:04:44.553 LIB libspdk_event_nvmf.a 00:04:44.553 SO libspdk_event_nvmf.so.6.0 00:04:44.553 SYMLINK libspdk_event_nvmf.so 00:04:44.554 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:44.554 CC module/event/subsystems/iscsi/iscsi.o 00:04:44.812 LIB libspdk_event_vhost_scsi.a 00:04:44.812 SO libspdk_event_vhost_scsi.so.3.0 00:04:44.812 LIB libspdk_event_iscsi.a 00:04:44.812 SYMLINK libspdk_event_vhost_scsi.so 00:04:44.812 SO libspdk_event_iscsi.so.6.0 00:04:44.812 SYMLINK libspdk_event_iscsi.so 00:04:45.094 SO libspdk.so.6.0 00:04:45.094 SYMLINK libspdk.so 00:04:45.094 CXX app/trace/trace.o 00:04:45.094 CC test/rpc_client/rpc_client_test.o 00:04:45.094 TEST_HEADER include/spdk/accel.h 00:04:45.094 TEST_HEADER include/spdk/accel_module.h 00:04:45.094 TEST_HEADER include/spdk/assert.h 00:04:45.094 TEST_HEADER include/spdk/barrier.h 00:04:45.094 TEST_HEADER include/spdk/base64.h 00:04:45.094 TEST_HEADER include/spdk/bdev.h 00:04:45.354 TEST_HEADER include/spdk/bdev_module.h 00:04:45.354 TEST_HEADER include/spdk/bdev_zone.h 00:04:45.354 TEST_HEADER include/spdk/bit_array.h 00:04:45.354 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:45.354 TEST_HEADER include/spdk/bit_pool.h 00:04:45.354 TEST_HEADER include/spdk/blob_bdev.h 00:04:45.354 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:45.354 TEST_HEADER include/spdk/blobfs.h 00:04:45.354 TEST_HEADER include/spdk/blob.h 00:04:45.354 TEST_HEADER include/spdk/conf.h 00:04:45.354 TEST_HEADER include/spdk/config.h 00:04:45.354 TEST_HEADER include/spdk/cpuset.h 00:04:45.354 TEST_HEADER include/spdk/crc16.h 00:04:45.354 TEST_HEADER include/spdk/crc32.h 00:04:45.354 TEST_HEADER include/spdk/crc64.h 00:04:45.354 TEST_HEADER include/spdk/dif.h 
00:04:45.354 TEST_HEADER include/spdk/dma.h 00:04:45.354 TEST_HEADER include/spdk/endian.h 00:04:45.354 TEST_HEADER include/spdk/env_dpdk.h 00:04:45.354 TEST_HEADER include/spdk/env.h 00:04:45.354 TEST_HEADER include/spdk/event.h 00:04:45.354 TEST_HEADER include/spdk/fd_group.h 00:04:45.354 TEST_HEADER include/spdk/fd.h 00:04:45.354 TEST_HEADER include/spdk/file.h 00:04:45.354 CC examples/ioat/perf/perf.o 00:04:45.354 TEST_HEADER include/spdk/fsdev.h 00:04:45.354 TEST_HEADER include/spdk/fsdev_module.h 00:04:45.354 TEST_HEADER include/spdk/ftl.h 00:04:45.354 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:45.354 TEST_HEADER include/spdk/gpt_spec.h 00:04:45.354 CC examples/util/zipf/zipf.o 00:04:45.354 CC test/thread/poller_perf/poller_perf.o 00:04:45.354 TEST_HEADER include/spdk/hexlify.h 00:04:45.354 TEST_HEADER include/spdk/histogram_data.h 00:04:45.354 TEST_HEADER include/spdk/idxd.h 00:04:45.354 TEST_HEADER include/spdk/idxd_spec.h 00:04:45.354 TEST_HEADER include/spdk/init.h 00:04:45.354 TEST_HEADER include/spdk/ioat.h 00:04:45.354 TEST_HEADER include/spdk/ioat_spec.h 00:04:45.354 TEST_HEADER include/spdk/iscsi_spec.h 00:04:45.354 TEST_HEADER include/spdk/json.h 00:04:45.354 TEST_HEADER include/spdk/jsonrpc.h 00:04:45.354 TEST_HEADER include/spdk/keyring.h 00:04:45.354 TEST_HEADER include/spdk/keyring_module.h 00:04:45.354 TEST_HEADER include/spdk/likely.h 00:04:45.355 TEST_HEADER include/spdk/log.h 00:04:45.355 TEST_HEADER include/spdk/lvol.h 00:04:45.355 TEST_HEADER include/spdk/md5.h 00:04:45.355 TEST_HEADER include/spdk/memory.h 00:04:45.355 TEST_HEADER include/spdk/mmio.h 00:04:45.355 TEST_HEADER include/spdk/nbd.h 00:04:45.355 TEST_HEADER include/spdk/net.h 00:04:45.355 CC test/app/bdev_svc/bdev_svc.o 00:04:45.355 TEST_HEADER include/spdk/notify.h 00:04:45.355 TEST_HEADER include/spdk/nvme.h 00:04:45.355 TEST_HEADER include/spdk/nvme_intel.h 00:04:45.355 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:45.355 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:45.355 TEST_HEADER include/spdk/nvme_spec.h 00:04:45.355 TEST_HEADER include/spdk/nvme_zns.h 00:04:45.355 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:45.355 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:45.355 TEST_HEADER include/spdk/nvmf.h 00:04:45.355 TEST_HEADER include/spdk/nvmf_spec.h 00:04:45.355 TEST_HEADER include/spdk/nvmf_transport.h 00:04:45.355 CC test/dma/test_dma/test_dma.o 00:04:45.355 TEST_HEADER include/spdk/opal.h 00:04:45.355 TEST_HEADER include/spdk/opal_spec.h 00:04:45.355 TEST_HEADER include/spdk/pci_ids.h 00:04:45.355 TEST_HEADER include/spdk/pipe.h 00:04:45.355 TEST_HEADER include/spdk/queue.h 00:04:45.355 TEST_HEADER include/spdk/reduce.h 00:04:45.355 TEST_HEADER include/spdk/rpc.h 00:04:45.355 CC test/env/mem_callbacks/mem_callbacks.o 00:04:45.355 TEST_HEADER include/spdk/scheduler.h 00:04:45.355 TEST_HEADER include/spdk/scsi.h 00:04:45.355 TEST_HEADER include/spdk/scsi_spec.h 00:04:45.355 TEST_HEADER include/spdk/sock.h 00:04:45.355 TEST_HEADER include/spdk/stdinc.h 00:04:45.355 TEST_HEADER include/spdk/string.h 00:04:45.355 TEST_HEADER include/spdk/thread.h 00:04:45.355 TEST_HEADER include/spdk/trace.h 00:04:45.355 TEST_HEADER include/spdk/trace_parser.h 00:04:45.355 TEST_HEADER include/spdk/tree.h 00:04:45.355 TEST_HEADER include/spdk/ublk.h 00:04:45.355 TEST_HEADER include/spdk/util.h 00:04:45.355 TEST_HEADER include/spdk/uuid.h 00:04:45.355 TEST_HEADER include/spdk/version.h 00:04:45.355 LINK rpc_client_test 00:04:45.355 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:45.355 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:45.355 TEST_HEADER include/spdk/vhost.h 00:04:45.355 TEST_HEADER include/spdk/vmd.h 00:04:45.355 TEST_HEADER include/spdk/xor.h 00:04:45.355 TEST_HEADER include/spdk/zipf.h 00:04:45.355 CXX test/cpp_headers/accel.o 00:04:45.355 LINK poller_perf 00:04:45.355 LINK zipf 00:04:45.355 LINK interrupt_tgt 00:04:45.616 LINK bdev_svc 00:04:45.616 LINK ioat_perf 00:04:45.616 CXX test/cpp_headers/accel_module.o 00:04:45.616 LINK spdk_trace 00:04:45.616 CC app/trace_record/trace_record.o 00:04:45.616 CC app/nvmf_tgt/nvmf_main.o 00:04:45.616 CXX test/cpp_headers/assert.o 00:04:45.616 CC test/event/event_perf/event_perf.o 00:04:45.616 CC examples/ioat/verify/verify.o 00:04:45.878 LINK test_dma 00:04:45.878 CC examples/thread/thread/thread_ex.o 00:04:45.878 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:45.878 LINK mem_callbacks 00:04:45.878 CXX test/cpp_headers/barrier.o 00:04:45.878 LINK nvmf_tgt 00:04:45.878 CC app/iscsi_tgt/iscsi_tgt.o 00:04:45.878 LINK spdk_trace_record 00:04:45.878 LINK event_perf 00:04:45.878 LINK verify 00:04:46.137 CXX test/cpp_headers/base64.o 00:04:46.137 LINK iscsi_tgt 00:04:46.137 CC test/env/vtophys/vtophys.o 00:04:46.137 LINK thread 00:04:46.137 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:46.137 CXX test/cpp_headers/bdev.o 00:04:46.137 CC test/event/reactor/reactor.o 00:04:46.137 CC app/spdk_lspci/spdk_lspci.o 00:04:46.137 CC app/spdk_tgt/spdk_tgt.o 00:04:46.137 LINK nvme_fuzz 00:04:46.137 LINK vtophys 00:04:46.395 LINK env_dpdk_post_init 00:04:46.395 CC examples/sock/hello_world/hello_sock.o 00:04:46.395 LINK reactor 00:04:46.395 LINK spdk_lspci 00:04:46.395 CXX test/cpp_headers/bdev_module.o 00:04:46.395 CC examples/vmd/lsvmd/lsvmd.o 00:04:46.395 CC test/accel/dif/dif.o 00:04:46.395 LINK spdk_tgt 00:04:46.395 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:46.396 CC test/app/histogram_perf/histogram_perf.o 00:04:46.396 CC test/env/memory/memory_ut.o 00:04:46.396 CC test/event/reactor_perf/reactor_perf.o 00:04:46.396 CXX test/cpp_headers/bdev_zone.o 00:04:46.396 CC test/app/jsoncat/jsoncat.o 00:04:46.396 LINK lsvmd 00:04:46.654 LINK hello_sock 00:04:46.654 LINK histogram_perf 00:04:46.654 LINK reactor_perf 00:04:46.654 LINK jsoncat 00:04:46.654 CC app/spdk_nvme_perf/perf.o 00:04:46.654 CXX test/cpp_headers/bit_array.o 00:04:46.654 CC test/app/stub/stub.o 00:04:46.654 CC examples/vmd/led/led.o 00:04:46.912 CC test/event/app_repeat/app_repeat.o 00:04:46.912 CXX test/cpp_headers/bit_pool.o 00:04:46.912 CC examples/idxd/perf/perf.o 00:04:46.912 LINK stub 00:04:46.912 LINK led 00:04:46.912 CXX test/cpp_headers/blob_bdev.o 00:04:46.912 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:46.912 LINK app_repeat 00:04:46.912 LINK dif 00:04:46.912 CXX test/cpp_headers/blobfs_bdev.o 00:04:47.170 CC test/blobfs/mkfs/mkfs.o 00:04:47.170 LINK idxd_perf 00:04:47.170 CXX test/cpp_headers/blobfs.o 00:04:47.170 CC test/event/scheduler/scheduler.o 00:04:47.170 LINK hello_fsdev 00:04:47.170 CC examples/accel/perf/accel_perf.o 00:04:47.170 CC test/lvol/esnap/esnap.o 00:04:47.170 CXX test/cpp_headers/blob.o 00:04:47.427 LINK memory_ut 00:04:47.427 LINK mkfs 00:04:47.427 CC test/nvme/aer/aer.o 00:04:47.427 LINK scheduler 00:04:47.427 CXX test/cpp_headers/conf.o 00:04:47.427 CC test/env/pci/pci_ut.o 00:04:47.427 CXX test/cpp_headers/config.o 00:04:47.427 CC test/bdev/bdevio/bdevio.o 00:04:47.427 LINK spdk_nvme_perf 00:04:47.427 CXX test/cpp_headers/cpuset.o 00:04:47.685 CXX test/cpp_headers/crc16.o 00:04:47.685 LINK aer 00:04:47.685 CC 
examples/blob/hello_world/hello_blob.o 00:04:47.685 CC examples/nvme/hello_world/hello_world.o 00:04:47.685 CC app/spdk_nvme_identify/identify.o 00:04:47.685 LINK accel_perf 00:04:47.685 LINK iscsi_fuzz 00:04:47.685 CXX test/cpp_headers/crc32.o 00:04:47.685 CC test/nvme/reset/reset.o 00:04:47.944 CXX test/cpp_headers/crc64.o 00:04:47.944 LINK pci_ut 00:04:47.944 LINK bdevio 00:04:47.944 LINK hello_blob 00:04:47.944 LINK hello_world 00:04:47.944 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:47.944 CC app/spdk_nvme_discover/discovery_aer.o 00:04:47.944 CXX test/cpp_headers/dif.o 00:04:47.944 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:47.944 CC app/spdk_top/spdk_top.o 00:04:47.944 LINK reset 00:04:47.944 CC test/nvme/sgl/sgl.o 00:04:48.203 LINK spdk_nvme_discover 00:04:48.203 CC examples/nvme/reconnect/reconnect.o 00:04:48.203 CC examples/blob/cli/blobcli.o 00:04:48.203 CXX test/cpp_headers/dma.o 00:04:48.203 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:48.203 CXX test/cpp_headers/endian.o 00:04:48.203 LINK sgl 00:04:48.203 CC examples/nvme/arbitration/arbitration.o 00:04:48.462 LINK vhost_fuzz 00:04:48.462 CXX test/cpp_headers/env_dpdk.o 00:04:48.462 LINK reconnect 00:04:48.462 LINK spdk_nvme_identify 00:04:48.462 CC test/nvme/e2edp/nvme_dp.o 00:04:48.462 CC examples/nvme/hotplug/hotplug.o 00:04:48.462 CXX test/cpp_headers/env.o 00:04:48.721 CXX test/cpp_headers/event.o 00:04:48.721 CXX test/cpp_headers/fd_group.o 00:04:48.721 LINK blobcli 00:04:48.721 LINK arbitration 00:04:48.721 LINK nvme_manage 00:04:48.721 CC app/vhost/vhost.o 00:04:48.721 CXX test/cpp_headers/fd.o 00:04:48.721 CXX test/cpp_headers/file.o 00:04:48.721 LINK nvme_dp 00:04:48.721 LINK hotplug 00:04:48.721 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:48.721 CXX test/cpp_headers/fsdev.o 00:04:48.980 CC app/spdk_dd/spdk_dd.o 00:04:48.980 CC app/fio/nvme/fio_plugin.o 00:04:48.980 LINK vhost 00:04:48.980 CC examples/nvme/abort/abort.o 00:04:48.980 CC test/nvme/overhead/overhead.o 00:04:48.980 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:48.980 LINK cmb_copy 00:04:48.980 LINK spdk_top 00:04:48.980 CXX test/cpp_headers/fsdev_module.o 00:04:48.980 CXX test/cpp_headers/ftl.o 00:04:48.980 CXX test/cpp_headers/fuse_dispatcher.o 00:04:48.980 CXX test/cpp_headers/gpt_spec.o 00:04:49.238 LINK pmr_persistence 00:04:49.238 CXX test/cpp_headers/hexlify.o 00:04:49.238 CXX test/cpp_headers/histogram_data.o 00:04:49.238 CXX test/cpp_headers/idxd.o 00:04:49.238 LINK spdk_dd 00:04:49.238 LINK overhead 00:04:49.238 CXX test/cpp_headers/idxd_spec.o 00:04:49.238 CXX test/cpp_headers/init.o 00:04:49.238 LINK abort 00:04:49.238 LINK spdk_nvme 00:04:49.238 CC app/fio/bdev/fio_plugin.o 00:04:49.238 CXX test/cpp_headers/ioat.o 00:04:49.497 CXX test/cpp_headers/ioat_spec.o 00:04:49.497 CXX test/cpp_headers/iscsi_spec.o 00:04:49.497 CXX test/cpp_headers/json.o 00:04:49.497 CC test/nvme/err_injection/err_injection.o 00:04:49.497 CC test/nvme/startup/startup.o 00:04:49.497 CC examples/bdev/hello_world/hello_bdev.o 00:04:49.497 CXX test/cpp_headers/jsonrpc.o 00:04:49.497 CC examples/bdev/bdevperf/bdevperf.o 00:04:49.497 CXX test/cpp_headers/keyring.o 00:04:49.497 CXX test/cpp_headers/keyring_module.o 00:04:49.497 CXX test/cpp_headers/likely.o 00:04:49.497 CXX test/cpp_headers/log.o 00:04:49.497 LINK startup 00:04:49.756 CXX test/cpp_headers/lvol.o 00:04:49.756 LINK err_injection 00:04:49.756 LINK hello_bdev 00:04:49.756 CXX test/cpp_headers/md5.o 00:04:49.756 LINK spdk_bdev 00:04:49.756 CC test/nvme/reserve/reserve.o 00:04:49.756 CC 
test/nvme/connect_stress/connect_stress.o 00:04:49.756 CC test/nvme/simple_copy/simple_copy.o 00:04:49.756 CC test/nvme/boot_partition/boot_partition.o 00:04:50.015 CC test/nvme/compliance/nvme_compliance.o 00:04:50.015 CC test/nvme/fused_ordering/fused_ordering.o 00:04:50.015 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:50.015 LINK connect_stress 00:04:50.015 CXX test/cpp_headers/memory.o 00:04:50.015 LINK reserve 00:04:50.015 LINK boot_partition 00:04:50.015 LINK simple_copy 00:04:50.015 LINK fused_ordering 00:04:50.015 CXX test/cpp_headers/mmio.o 00:04:50.015 LINK doorbell_aers 00:04:50.273 CXX test/cpp_headers/nbd.o 00:04:50.273 CC test/nvme/fdp/fdp.o 00:04:50.273 CXX test/cpp_headers/net.o 00:04:50.273 CXX test/cpp_headers/notify.o 00:04:50.273 CC test/nvme/cuse/cuse.o 00:04:50.273 CXX test/cpp_headers/nvme.o 00:04:50.273 LINK nvme_compliance 00:04:50.273 CXX test/cpp_headers/nvme_intel.o 00:04:50.273 LINK bdevperf 00:04:50.273 CXX test/cpp_headers/nvme_ocssd.o 00:04:50.273 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:50.273 CXX test/cpp_headers/nvme_spec.o 00:04:50.273 CXX test/cpp_headers/nvme_zns.o 00:04:50.273 CXX test/cpp_headers/nvmf_cmd.o 00:04:50.273 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:50.532 CXX test/cpp_headers/nvmf.o 00:04:50.532 CXX test/cpp_headers/nvmf_spec.o 00:04:50.532 CXX test/cpp_headers/nvmf_transport.o 00:04:50.532 CXX test/cpp_headers/opal.o 00:04:50.532 LINK fdp 00:04:50.532 CXX test/cpp_headers/opal_spec.o 00:04:50.532 CXX test/cpp_headers/pci_ids.o 00:04:50.532 CXX test/cpp_headers/pipe.o 00:04:50.532 CC examples/nvmf/nvmf/nvmf.o 00:04:50.532 CXX test/cpp_headers/queue.o 00:04:50.532 CXX test/cpp_headers/reduce.o 00:04:50.532 CXX test/cpp_headers/rpc.o 00:04:50.532 CXX test/cpp_headers/scheduler.o 00:04:50.532 CXX test/cpp_headers/scsi.o 00:04:50.532 CXX test/cpp_headers/scsi_spec.o 00:04:50.791 CXX test/cpp_headers/sock.o 00:04:50.791 CXX test/cpp_headers/stdinc.o 00:04:50.791 CXX test/cpp_headers/string.o 00:04:50.791 CXX test/cpp_headers/thread.o 00:04:50.791 LINK nvmf 00:04:50.791 CXX test/cpp_headers/trace.o 00:04:50.791 CXX test/cpp_headers/trace_parser.o 00:04:50.791 CXX test/cpp_headers/tree.o 00:04:50.791 CXX test/cpp_headers/ublk.o 00:04:50.791 CXX test/cpp_headers/util.o 00:04:50.791 CXX test/cpp_headers/uuid.o 00:04:50.791 CXX test/cpp_headers/version.o 00:04:50.791 CXX test/cpp_headers/vfio_user_pci.o 00:04:50.791 CXX test/cpp_headers/vfio_user_spec.o 00:04:51.051 CXX test/cpp_headers/vhost.o 00:04:51.051 CXX test/cpp_headers/vmd.o 00:04:51.051 CXX test/cpp_headers/xor.o 00:04:51.051 CXX test/cpp_headers/zipf.o 00:04:51.051 LINK cuse 00:04:51.624 LINK esnap 00:04:52.194 00:04:52.194 real 1m0.894s 00:04:52.194 user 5m3.727s 00:04:52.194 sys 0m58.562s 00:04:52.194 00:36:43 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:52.194 ************************************ 00:04:52.194 END TEST make 00:04:52.194 ************************************ 00:04:52.194 00:36:43 make -- common/autotest_common.sh@10 -- $ set +x 00:04:52.194 00:36:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:52.194 00:36:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:52.194 00:36:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:52.194 00:36:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.194 00:36:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:52.194 00:36:44 -- pm/common@44 -- $ pid=5805 00:04:52.194 00:36:44 -- pm/common@50 -- $ 
kill -TERM 5805 00:04:52.194 00:36:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.194 00:36:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:52.194 00:36:44 -- pm/common@44 -- $ pid=5806 00:04:52.194 00:36:44 -- pm/common@50 -- $ kill -TERM 5806 00:04:52.194 00:36:44 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:52.194 00:36:44 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:52.194 00:36:44 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:52.194 00:36:44 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:52.194 00:36:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.194 00:36:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.194 00:36:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.194 00:36:44 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.194 00:36:44 -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.194 00:36:44 -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.194 00:36:44 -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.194 00:36:44 -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.194 00:36:44 -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.194 00:36:44 -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.194 00:36:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.194 00:36:44 -- scripts/common.sh@344 -- # case "$op" in 00:04:52.194 00:36:44 -- scripts/common.sh@345 -- # : 1 00:04:52.194 00:36:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.194 00:36:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.194 00:36:44 -- scripts/common.sh@365 -- # decimal 1 00:04:52.194 00:36:44 -- scripts/common.sh@353 -- # local d=1 00:04:52.194 00:36:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.194 00:36:44 -- scripts/common.sh@355 -- # echo 1 00:04:52.194 00:36:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.194 00:36:44 -- scripts/common.sh@366 -- # decimal 2 00:04:52.194 00:36:44 -- scripts/common.sh@353 -- # local d=2 00:04:52.194 00:36:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.194 00:36:44 -- scripts/common.sh@355 -- # echo 2 00:04:52.194 00:36:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.194 00:36:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.194 00:36:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.194 00:36:44 -- scripts/common.sh@368 -- # return 0 00:04:52.194 00:36:44 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.194 00:36:44 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:52.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.194 --rc genhtml_branch_coverage=1 00:04:52.194 --rc genhtml_function_coverage=1 00:04:52.194 --rc genhtml_legend=1 00:04:52.194 --rc geninfo_all_blocks=1 00:04:52.194 --rc geninfo_unexecuted_blocks=1 00:04:52.194 00:04:52.194 ' 00:04:52.194 00:36:44 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:52.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.194 --rc genhtml_branch_coverage=1 00:04:52.194 --rc genhtml_function_coverage=1 00:04:52.194 --rc genhtml_legend=1 00:04:52.194 --rc geninfo_all_blocks=1 00:04:52.194 --rc geninfo_unexecuted_blocks=1 00:04:52.194 00:04:52.194 ' 00:04:52.194 00:36:44 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:52.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.194 
--rc genhtml_branch_coverage=1 00:04:52.194 --rc genhtml_function_coverage=1 00:04:52.194 --rc genhtml_legend=1 00:04:52.194 --rc geninfo_all_blocks=1 00:04:52.194 --rc geninfo_unexecuted_blocks=1 00:04:52.194 00:04:52.194 ' 00:04:52.194 00:36:44 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:52.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.194 --rc genhtml_branch_coverage=1 00:04:52.194 --rc genhtml_function_coverage=1 00:04:52.194 --rc genhtml_legend=1 00:04:52.194 --rc geninfo_all_blocks=1 00:04:52.194 --rc geninfo_unexecuted_blocks=1 00:04:52.194 00:04:52.194 ' 00:04:52.194 00:36:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.194 00:36:44 -- nvmf/common.sh@7 -- # uname -s 00:04:52.194 00:36:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.194 00:36:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.194 00:36:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.194 00:36:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.194 00:36:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.194 00:36:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.194 00:36:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.194 00:36:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.194 00:36:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.194 00:36:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.194 00:36:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5bb242f-1de1-40dd-90e9-47f53f9db552 00:04:52.194 00:36:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=e5bb242f-1de1-40dd-90e9-47f53f9db552 00:04:52.194 00:36:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.194 00:36:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.194 00:36:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.195 00:36:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.195 00:36:44 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.195 00:36:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.195 00:36:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.195 00:36:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.195 00:36:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.195 00:36:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.195 00:36:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.195 00:36:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.195 00:36:44 -- paths/export.sh@5 -- # export PATH 00:04:52.195 00:36:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.195 00:36:44 -- nvmf/common.sh@51 -- # : 0 00:04:52.195 00:36:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.195 00:36:44 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.195 00:36:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.195 00:36:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.195 00:36:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.195 00:36:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.195 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.195 00:36:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.195 00:36:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.195 00:36:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.195 00:36:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:52.195 00:36:44 -- spdk/autotest.sh@32 -- # uname -s 00:04:52.195 00:36:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:52.195 00:36:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:52.195 00:36:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.195 00:36:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:52.195 00:36:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.195 00:36:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:52.456 00:36:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:52.456 00:36:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:52.456 00:36:44 -- spdk/autotest.sh@48 -- # udevadm_pid=66973 00:04:52.456 00:36:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:52.456 00:36:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:52.456 00:36:44 -- pm/common@17 -- # local monitor 00:04:52.456 00:36:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.456 00:36:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.456 00:36:44 -- pm/common@25 -- # sleep 1 00:04:52.456 00:36:44 -- pm/common@21 -- # date +%s 00:04:52.456 00:36:44 -- pm/common@21 -- # date +%s 00:04:52.456 00:36:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731803804 00:04:52.456 00:36:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731803804 00:04:52.456 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731803804_collect-cpu-load.pm.log 00:04:52.456 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731803804_collect-vmstat.pm.log 00:04:53.401 00:36:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:53.401 00:36:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:53.401 00:36:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.401 00:36:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.401 00:36:45 -- spdk/autotest.sh@59 -- # create_test_list 
00:04:53.401 00:36:45 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:53.401 00:36:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.401 00:36:45 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:53.401 00:36:45 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:53.401 00:36:45 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:53.401 00:36:45 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:53.401 00:36:45 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:53.401 00:36:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:53.401 00:36:45 -- common/autotest_common.sh@1455 -- # uname 00:04:53.401 00:36:45 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:53.401 00:36:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:53.401 00:36:45 -- common/autotest_common.sh@1475 -- # uname 00:04:53.401 00:36:45 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:53.401 00:36:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:53.401 00:36:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:53.401 lcov: LCOV version 1.15 00:04:53.401 00:36:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:08.290 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:08.290 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:23.176 00:37:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:23.176 00:37:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.176 00:37:14 -- common/autotest_common.sh@10 -- # set +x 00:05:23.176 00:37:14 -- spdk/autotest.sh@78 -- # rm -f 00:05:23.176 00:37:14 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.435 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:23.435 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:23.435 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:23.435 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:23.435 00:37:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:23.435 00:37:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:23.435 00:37:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:23.435 00:37:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:23.435 00:37:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.435 00:37:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:23.435 00:37:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:23.435 00:37:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.435 00:37:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.435 00:37:15 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.435 00:37:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:23.435 00:37:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:23.435 00:37:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:23.435 00:37:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.435 00:37:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.435 00:37:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:23.435 00:37:15 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:23.435 00:37:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:23.435 00:37:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.435 00:37:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.435 00:37:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:23.435 00:37:15 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:23.435 00:37:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:23.435 00:37:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.435 00:37:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.435 00:37:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:23.435 00:37:15 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:23.435 00:37:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:23.436 00:37:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.436 00:37:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.436 00:37:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:23.436 00:37:15 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:23.436 00:37:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:23.436 00:37:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.436 00:37:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.436 00:37:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:23.436 00:37:15 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:23.436 00:37:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:23.436 00:37:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.436 00:37:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:23.436 00:37:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.436 00:37:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.436 00:37:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:23.436 00:37:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:23.436 00:37:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:23.436 No valid GPT data, bailing 00:05:23.436 00:37:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:23.436 00:37:15 -- scripts/common.sh@394 -- # pt= 00:05:23.436 00:37:15 -- scripts/common.sh@395 -- # return 1 00:05:23.436 00:37:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:23.436 1+0 records in 00:05:23.436 1+0 records out 00:05:23.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124432 s, 84.3 MB/s 
00:05:23.436 00:37:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.436 00:37:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.436 00:37:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:23.436 00:37:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:23.436 00:37:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:23.436 No valid GPT data, bailing 00:05:23.436 00:37:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:23.695 00:37:15 -- scripts/common.sh@394 -- # pt= 00:05:23.695 00:37:15 -- scripts/common.sh@395 -- # return 1 00:05:23.695 00:37:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:23.695 1+0 records in 00:05:23.695 1+0 records out 00:05:23.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00556993 s, 188 MB/s 00:05:23.695 00:37:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.695 00:37:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.695 00:37:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:23.695 00:37:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:23.695 00:37:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:23.695 No valid GPT data, bailing 00:05:23.695 00:37:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:23.695 00:37:15 -- scripts/common.sh@394 -- # pt= 00:05:23.695 00:37:15 -- scripts/common.sh@395 -- # return 1 00:05:23.695 00:37:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:23.695 1+0 records in 00:05:23.695 1+0 records out 00:05:23.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00912989 s, 115 MB/s 00:05:23.695 00:37:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.695 00:37:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.695 00:37:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:23.695 00:37:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:23.695 00:37:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:23.695 No valid GPT data, bailing 00:05:23.695 00:37:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:23.695 00:37:15 -- scripts/common.sh@394 -- # pt= 00:05:23.695 00:37:15 -- scripts/common.sh@395 -- # return 1 00:05:23.695 00:37:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:23.695 1+0 records in 00:05:23.695 1+0 records out 00:05:23.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669165 s, 157 MB/s 00:05:23.695 00:37:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.695 00:37:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.695 00:37:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:23.695 00:37:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:23.695 00:37:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:23.695 No valid GPT data, bailing 00:05:23.695 00:37:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:23.695 00:37:15 -- scripts/common.sh@394 -- # pt= 00:05:23.695 00:37:15 -- scripts/common.sh@395 -- # return 1 00:05:23.695 00:37:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:23.695 1+0 records in 00:05:23.695 1+0 records out 00:05:23.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049891 s, 210 MB/s 
00:05:23.695 00:37:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.695 00:37:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.695 00:37:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:23.695 00:37:15 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:23.695 00:37:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:23.695 No valid GPT data, bailing 00:05:23.695 00:37:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:23.954 00:37:15 -- scripts/common.sh@394 -- # pt= 00:05:23.954 00:37:15 -- scripts/common.sh@395 -- # return 1 00:05:23.954 00:37:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:23.954 1+0 records in 00:05:23.954 1+0 records out 00:05:23.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502064 s, 209 MB/s 00:05:23.954 00:37:15 -- spdk/autotest.sh@105 -- # sync 00:05:23.954 00:37:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:23.954 00:37:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:23.954 00:37:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:25.855 00:37:17 -- spdk/autotest.sh@111 -- # uname -s 00:05:25.855 00:37:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:25.855 00:37:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:25.855 00:37:17 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:25.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.423 Hugepages 00:05:26.423 node hugesize free / total 00:05:26.423 node0 1048576kB 0 / 0 00:05:26.423 node0 2048kB 0 / 0 00:05:26.423 00:05:26.423 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:26.423 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:26.423 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:26.423 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:26.423 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:26.683 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:26.683 00:37:18 -- spdk/autotest.sh@117 -- # uname -s 00:05:26.683 00:37:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:26.683 00:37:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:26.683 00:37:18 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.518 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.779 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.779 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.779 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.779 00:37:19 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:28.722 00:37:20 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:28.722 00:37:20 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:28.722 00:37:20 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:28.722 00:37:20 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:28.722 00:37:20 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:28.722 00:37:20 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:28.722 00:37:20 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:05:28.722 00:37:20 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:28.722 00:37:20 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:28.984 00:37:20 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:28.984 00:37:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:28.984 00:37:20 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.246 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.508 Waiting for block devices as requested 00:05:29.508 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:29.508 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:29.508 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:29.508 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.802 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:34.802 00:37:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:34.802 00:37:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:34.802 00:37:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:34.802 00:37:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:34.802 00:37:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:34.802 00:37:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:34.802 00:37:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:34.802 00:37:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:34.802 00:37:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:34.802 00:37:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:34.802 00:37:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:34.802 00:37:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:34.802 00:37:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:34.802 00:37:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:34.802 00:37:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:34.802 00:37:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:34.802 00:37:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:34.802 00:37:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:34.802 00:37:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:34.802 00:37:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:34.802 00:37:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:34.802 00:37:26 -- common/autotest_common.sh@1541 -- # continue 00:05:34.802 00:37:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:34.802 00:37:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:34.802 00:37:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:34.802 00:37:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:34.802 00:37:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:05:34.802 00:37:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:34.802 00:37:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:34.802 00:37:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:34.802 00:37:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:34.802 00:37:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:34.802 00:37:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:34.802 00:37:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:34.802 00:37:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:34.802 00:37:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:34.802 00:37:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:34.802 00:37:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:34.802 00:37:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:34.802 00:37:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:34.802 00:37:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:34.802 00:37:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:34.803 00:37:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:34.803 00:37:26 -- common/autotest_common.sh@1541 -- # continue 00:05:34.803 00:37:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:34.803 00:37:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:34.803 00:37:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:34.803 00:37:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:34.803 00:37:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:34.803 00:37:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:34.803 00:37:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:34.803 00:37:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:34.803 00:37:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:34.803 00:37:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:34.803 00:37:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:34.803 00:37:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:34.803 00:37:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:34.803 00:37:26 -- common/autotest_common.sh@1541 -- # continue 00:05:34.803 00:37:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:34.803 00:37:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:34.803 00:37:26 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:34.803 00:37:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:34.803 00:37:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:34.803 00:37:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:34.803 00:37:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:34.803 00:37:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:34.803 00:37:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:34.803 00:37:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:34.803 00:37:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:34.803 00:37:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:34.803 00:37:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:34.803 00:37:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:34.803 00:37:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:34.803 00:37:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:34.803 00:37:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:34.803 00:37:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:34.803 00:37:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:34.803 00:37:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:34.803 00:37:26 -- common/autotest_common.sh@1541 -- # continue 00:05:34.803 00:37:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:34.803 00:37:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.803 00:37:26 -- common/autotest_common.sh@10 -- # set +x 00:05:34.803 00:37:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:34.803 00:37:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.803 00:37:26 -- common/autotest_common.sh@10 -- # set +x 00:05:34.803 00:37:26 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.947 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.947 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.947 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.947 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.208 00:37:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:36.208 00:37:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.208 00:37:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.208 00:37:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:36.208 00:37:28 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:36.208 00:37:28 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.208 00:37:28 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:36.208 00:37:28 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:36.208 00:37:28 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:36.208 00:37:28 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:36.208 00:37:28 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:36.208 00:37:28 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:36.208 
00:37:28 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:36.208 00:37:28 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.208 00:37:28 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.208 00:37:28 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:36.208 00:37:28 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:36.208 00:37:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:36.208 00:37:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:36.208 00:37:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:36.208 00:37:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:36.208 00:37:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.208 00:37:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:36.208 00:37:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:36.208 00:37:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:36.208 00:37:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.208 00:37:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:36.208 00:37:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:36.208 00:37:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:36.208 00:37:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.208 00:37:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:36.208 00:37:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:36.208 00:37:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:36.208 00:37:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.208 00:37:28 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:36.208 00:37:28 -- common/autotest_common.sh@1570 -- # return 0 00:05:36.208 00:37:28 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:36.208 00:37:28 -- common/autotest_common.sh@1578 -- # return 0 00:05:36.208 00:37:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:36.208 00:37:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:36.208 00:37:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.208 00:37:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.208 00:37:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:36.208 00:37:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.208 00:37:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.208 00:37:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:36.208 00:37:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:36.208 00:37:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.208 00:37:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.208 00:37:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.208 ************************************ 00:05:36.208 START TEST env 00:05:36.208 ************************************ 00:05:36.208 00:37:28 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:36.470 * Looking for test storage... 
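The opal_revert_cleanup pass above walks every NVMe BDF reported by gen_nvme.sh and keeps only controllers whose PCI device ID is 0x0a54 (an OPAL-capable part); the QEMU controllers here all report 0x0010, so nothing matches and the revert is skipped. A condensed sketch of that filter, run from the SPDK repo root:

    # Enumerate NVMe BDFs from gen_nvme.sh JSON, keep device ID 0x0a54 only (sketch).
    mapfile -t bdfs < <(scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    for bdf in "${bdfs[@]}"; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done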
00:05:36.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:36.470 00:37:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.470 00:37:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.470 00:37:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.470 00:37:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.470 00:37:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.470 00:37:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.470 00:37:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.470 00:37:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.470 00:37:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.470 00:37:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.470 00:37:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.470 00:37:28 env -- scripts/common.sh@344 -- # case "$op" in 00:05:36.470 00:37:28 env -- scripts/common.sh@345 -- # : 1 00:05:36.470 00:37:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.470 00:37:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.470 00:37:28 env -- scripts/common.sh@365 -- # decimal 1 00:05:36.470 00:37:28 env -- scripts/common.sh@353 -- # local d=1 00:05:36.470 00:37:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.470 00:37:28 env -- scripts/common.sh@355 -- # echo 1 00:05:36.470 00:37:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.470 00:37:28 env -- scripts/common.sh@366 -- # decimal 2 00:05:36.470 00:37:28 env -- scripts/common.sh@353 -- # local d=2 00:05:36.470 00:37:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.470 00:37:28 env -- scripts/common.sh@355 -- # echo 2 00:05:36.470 00:37:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.470 00:37:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.470 00:37:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.470 00:37:28 env -- scripts/common.sh@368 -- # return 0 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.470 --rc genhtml_branch_coverage=1 00:05:36.470 --rc genhtml_function_coverage=1 00:05:36.470 --rc genhtml_legend=1 00:05:36.470 --rc geninfo_all_blocks=1 00:05:36.470 --rc geninfo_unexecuted_blocks=1 00:05:36.470 00:05:36.470 ' 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.470 --rc genhtml_branch_coverage=1 00:05:36.470 --rc genhtml_function_coverage=1 00:05:36.470 --rc genhtml_legend=1 00:05:36.470 --rc geninfo_all_blocks=1 00:05:36.470 --rc geninfo_unexecuted_blocks=1 00:05:36.470 00:05:36.470 ' 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.470 --rc genhtml_branch_coverage=1 00:05:36.470 --rc genhtml_function_coverage=1 00:05:36.470 --rc 
genhtml_legend=1 00:05:36.470 --rc geninfo_all_blocks=1 00:05:36.470 --rc geninfo_unexecuted_blocks=1 00:05:36.470 00:05:36.470 ' 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.470 --rc genhtml_branch_coverage=1 00:05:36.470 --rc genhtml_function_coverage=1 00:05:36.470 --rc genhtml_legend=1 00:05:36.470 --rc geninfo_all_blocks=1 00:05:36.470 --rc geninfo_unexecuted_blocks=1 00:05:36.470 00:05:36.470 ' 00:05:36.470 00:37:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.470 00:37:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.470 00:37:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.470 ************************************ 00:05:36.470 START TEST env_memory 00:05:36.470 ************************************ 00:05:36.470 00:37:28 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:36.471 00:05:36.471 00:05:36.471 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.471 http://cunit.sourceforge.net/ 00:05:36.471 00:05:36.471 00:05:36.471 Suite: memory 00:05:36.471 Test: alloc and free memory map ...[2024-11-17 00:37:28.462628] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:36.471 passed 00:05:36.471 Test: mem map translation ...[2024-11-17 00:37:28.501679] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:36.471 [2024-11-17 00:37:28.501743] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:36.471 [2024-11-17 00:37:28.501806] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:36.471 [2024-11-17 00:37:28.501823] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:36.732 passed 00:05:36.732 Test: mem map registration ...[2024-11-17 00:37:28.570853] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:36.732 [2024-11-17 00:37:28.570912] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:36.732 passed 00:05:36.732 Test: mem map adjacent registrations ...passed 00:05:36.732 00:05:36.732 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.732 suites 1 1 n/a 0 0 00:05:36.732 tests 4 4 4 0 0 00:05:36.732 asserts 152 152 152 0 n/a 00:05:36.732 00:05:36.732 Elapsed time = 0.236 seconds 00:05:36.732 00:05:36.732 real 0m0.275s 00:05:36.732 user 0m0.248s 00:05:36.732 sys 0m0.019s 00:05:36.732 00:37:28 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.732 00:37:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:36.732 ************************************ 00:05:36.732 END TEST env_memory 00:05:36.732 ************************************ 00:05:36.732 00:37:28 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:36.732 00:37:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.732 00:37:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.732 00:37:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.732 ************************************ 00:05:36.732 START TEST env_vtophys 00:05:36.732 ************************************ 00:05:36.732 00:37:28 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:36.732 EAL: lib.eal log level changed from notice to debug 00:05:36.732 EAL: Detected lcore 0 as core 0 on socket 0 00:05:36.732 EAL: Detected lcore 1 as core 0 on socket 0 00:05:36.732 EAL: Detected lcore 2 as core 0 on socket 0 00:05:36.732 EAL: Detected lcore 3 as core 0 on socket 0 00:05:36.732 EAL: Detected lcore 4 as core 0 on socket 0 00:05:36.732 EAL: Detected lcore 5 as core 0 on socket 0 00:05:36.732 EAL: Detected lcore 6 as core 0 on socket 0 00:05:36.732 EAL: Detected lcore 7 as core 0 on socket 0 00:05:36.732 EAL: Detected lcore 8 as core 0 on socket 0 00:05:36.732 EAL: Detected lcore 9 as core 0 on socket 0 00:05:36.732 EAL: Maximum logical cores by configuration: 128 00:05:36.732 EAL: Detected CPU lcores: 10 00:05:36.732 EAL: Detected NUMA nodes: 1 00:05:36.732 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:36.732 EAL: Detected shared linkage of DPDK 00:05:36.732 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:36.732 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:36.732 EAL: Registered [vdev] bus. 00:05:36.732 EAL: bus.vdev log level changed from disabled to notice 00:05:36.732 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:36.732 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:36.732 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:36.732 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:36.732 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:36.732 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:36.732 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:36.732 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:36.993 EAL: No shared files mode enabled, IPC will be disabled 00:05:36.993 EAL: No shared files mode enabled, IPC is disabled 00:05:36.993 EAL: Selected IOVA mode 'PA' 00:05:36.993 EAL: Probing VFIO support... 00:05:36.993 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:36.993 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:36.993 EAL: Ask a virtual area of 0x2e000 bytes 00:05:36.993 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:36.993 EAL: Setting up physically contiguous memory... 
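Because /sys/module/vfio is absent in this VM, the EAL falls back to uio_pci_generic-style device access and IOVA mode 'PA'. Its availability probe amounts to a module check along these lines (a rough shell equivalent, not the EAL's actual code path):

    # Rough equivalent of the EAL's VFIO availability probe (sketch).
    if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
      echo "vfio loaded: IOVA mode VA is possible"
    else
      echo "no vfio: expect IOVA mode PA, as in this run"
    fi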
00:05:36.993 EAL: Setting maximum number of open files to 524288 00:05:36.993 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:36.993 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:36.993 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.993 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:36.993 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.993 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.993 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:36.993 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:36.993 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.993 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:36.993 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.993 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.994 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:36.994 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:36.994 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.994 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:36.994 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.994 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:36.994 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:36.994 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.994 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:36.994 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.994 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:36.994 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:36.994 EAL: Hugepages will be freed exactly as allocated. 00:05:36.994 EAL: No shared files mode enabled, IPC is disabled 00:05:36.994 EAL: No shared files mode enabled, IPC is disabled 00:05:36.994 EAL: TSC frequency is ~2600000 KHz 00:05:36.994 EAL: Main lcore 0 is ready (tid=7fb8387f9a40;cpuset=[0]) 00:05:36.994 EAL: Trying to obtain current memory policy. 00:05:36.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.994 EAL: Restoring previous memory policy: 0 00:05:36.994 EAL: request: mp_malloc_sync 00:05:36.994 EAL: No shared files mode enabled, IPC is disabled 00:05:36.994 EAL: Heap on socket 0 was expanded by 2MB 00:05:36.994 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:36.994 EAL: No shared files mode enabled, IPC is disabled 00:05:36.994 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:36.994 EAL: Mem event callback 'spdk:(nil)' registered 00:05:36.994 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:36.994 00:05:36.994 00:05:36.994 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.994 http://cunit.sourceforge.net/ 00:05:36.994 00:05:36.994 00:05:36.994 Suite: components_suite 00:05:37.574 Test: vtophys_malloc_test ...passed 00:05:37.574 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
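Each of the four memseg lists above pairs a 0x61000-byte header with a 0x400000000-byte VA reservation, which matches the advertised geometry of n_segs:8192 at a 2 MiB hugepage size. A quick check of that arithmetic:

    # 8192 segments x 2 MiB = 16 GiB = 0x400000000 per memseg list.
    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # prints 0x400000000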
00:05:37.574 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.574 EAL: Restoring previous memory policy: 4 00:05:37.574 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.574 EAL: request: mp_malloc_sync 00:05:37.574 EAL: No shared files mode enabled, IPC is disabled 00:05:37.574 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.574 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.574 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.575 EAL: Trying to obtain current memory policy. 00:05:37.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.575 EAL: Restoring previous memory policy: 4 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.575 EAL: Trying to obtain current memory policy. 00:05:37.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.575 EAL: Restoring previous memory policy: 4 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.575 EAL: Trying to obtain current memory policy. 00:05:37.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.575 EAL: Restoring previous memory policy: 4 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.575 EAL: Trying to obtain current memory policy. 00:05:37.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.575 EAL: Restoring previous memory policy: 4 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.575 EAL: Trying to obtain current memory policy. 
00:05:37.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.575 EAL: Restoring previous memory policy: 4 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.575 EAL: Trying to obtain current memory policy. 00:05:37.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.575 EAL: Restoring previous memory policy: 4 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.575 EAL: Trying to obtain current memory policy. 00:05:37.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.575 EAL: Restoring previous memory policy: 4 00:05:37.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.575 EAL: request: mp_malloc_sync 00:05:37.575 EAL: No shared files mode enabled, IPC is disabled 00:05:37.575 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.837 EAL: request: mp_malloc_sync 00:05:37.837 EAL: No shared files mode enabled, IPC is disabled 00:05:37.837 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.837 EAL: Trying to obtain current memory policy. 00:05:37.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.837 EAL: Restoring previous memory policy: 4 00:05:37.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.837 EAL: request: mp_malloc_sync 00:05:37.837 EAL: No shared files mode enabled, IPC is disabled 00:05:37.837 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.099 EAL: request: mp_malloc_sync 00:05:38.099 EAL: No shared files mode enabled, IPC is disabled 00:05:38.099 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.099 EAL: Trying to obtain current memory policy. 
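The expand/shrink sizes in this suite follow a 2^k + 2 MB progression (4, 6, 10, 18, 34, 66, 130, 258, 514, ...), which the final round below continues to 1026 MB. The sequence can be reproduced directly:

    # The vtophys rounds grow as 2^k + 2 MB: 4 6 10 18 34 66 130 258 514 1026.
    for k in $(seq 1 10); do printf '%d ' $(( (1 << k) + 2 )); done; echo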
00:05:38.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.753 EAL: Restoring previous memory policy: 4 00:05:38.753 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.753 EAL: request: mp_malloc_sync 00:05:38.753 EAL: No shared files mode enabled, IPC is disabled 00:05:38.753 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.753 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.011 passed 00:05:39.011 00:05:39.011 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.011 suites 1 1 n/a 0 0 00:05:39.011 tests 2 2 2 0 0 00:05:39.011 asserts 5932 5932 5932 0 n/a 00:05:39.011 00:05:39.011 Elapsed time = 1.890 seconds 00:05:39.011 EAL: request: mp_malloc_sync 00:05:39.011 EAL: No shared files mode enabled, IPC is disabled 00:05:39.011 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:39.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.011 EAL: request: mp_malloc_sync 00:05:39.011 EAL: No shared files mode enabled, IPC is disabled 00:05:39.011 EAL: Heap on socket 0 was shrunk by 2MB 00:05:39.011 EAL: No shared files mode enabled, IPC is disabled 00:05:39.011 EAL: No shared files mode enabled, IPC is disabled 00:05:39.011 EAL: No shared files mode enabled, IPC is disabled 00:05:39.011 00:05:39.011 real 0m2.142s 00:05:39.011 user 0m0.908s 00:05:39.011 sys 0m1.084s 00:05:39.011 00:37:30 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.011 ************************************ 00:05:39.011 00:37:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:39.011 END TEST env_vtophys 00:05:39.011 ************************************ 00:05:39.011 00:37:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.011 00:37:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.011 00:37:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.011 00:37:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.011 ************************************ 00:05:39.011 START TEST env_pci 00:05:39.011 ************************************ 00:05:39.011 00:37:30 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.011 00:05:39.011 00:05:39.011 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.011 http://cunit.sourceforge.net/ 00:05:39.011 00:05:39.011 00:05:39.011 Suite: pci 00:05:39.011 Test: pci_hook ...[2024-11-17 00:37:30.960298] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69716 has claimed it 00:05:39.011 passed 00:05:39.011 00:05:39.011 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.011 suites 1 1 n/a 0 0 00:05:39.011 tests 1 1 1 0 0 00:05:39.011 asserts 25 25 25 0 n/a 00:05:39.011 00:05:39.011 Elapsed time = 0.004 seconds 00:05:39.011 EAL: Cannot find device (10000:00:01.0) 00:05:39.011 EAL: Failed to attach device on primary process 00:05:39.011 00:05:39.011 real 0m0.051s 00:05:39.011 user 0m0.019s 00:05:39.011 sys 0m0.031s 00:05:39.011 00:37:30 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.011 ************************************ 00:05:39.011 END TEST env_pci 00:05:39.011 ************************************ 00:05:39.011 00:37:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:39.011 00:37:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.011 00:37:31 env -- env/env.sh@15 -- # uname 00:05:39.011 00:37:31 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:39.011 00:37:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:39.011 00:37:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.011 00:37:31 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:39.011 00:37:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.011 00:37:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.011 ************************************ 00:05:39.011 START TEST env_dpdk_post_init 00:05:39.011 ************************************ 00:05:39.011 00:37:31 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.270 EAL: Detected CPU lcores: 10 00:05:39.270 EAL: Detected NUMA nodes: 1 00:05:39.270 EAL: Detected shared linkage of DPDK 00:05:39.270 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.270 EAL: Selected IOVA mode 'PA' 00:05:39.270 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.270 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:39.270 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:39.270 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:39.270 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:39.270 Starting DPDK initialization... 00:05:39.270 Starting SPDK post initialization... 00:05:39.270 SPDK NVMe probe 00:05:39.270 Attaching to 0000:00:10.0 00:05:39.270 Attaching to 0000:00:11.0 00:05:39.270 Attaching to 0000:00:12.0 00:05:39.270 Attaching to 0000:00:13.0 00:05:39.270 Attached to 0000:00:13.0 00:05:39.270 Attached to 0000:00:10.0 00:05:39.270 Attached to 0000:00:11.0 00:05:39.270 Attached to 0000:00:12.0 00:05:39.270 Cleaning up... 
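env.sh assembles the DPDK arguments seen here in two steps: the core mask first, then (on Linux only) a fixed --base-virtaddr so the address layout is deterministic. Note also that the "Attached to" completions arrive out of submission order, 13.0 finishing before 10.0. Reconstructed from the trace:

    # Argument assembly as traced in env.sh above (sketch).
    argv='-c 0x1'
    [[ $(uname) == Linux ]] && argv+=' --base-virtaddr=0x200000000000'
    test/env/env_dpdk_post_init/env_dpdk_post_init $argv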
00:05:39.270 00:05:39.270 real 0m0.224s 00:05:39.270 user 0m0.057s 00:05:39.270 sys 0m0.068s 00:05:39.270 00:37:31 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.270 ************************************ 00:05:39.270 00:37:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.270 END TEST env_dpdk_post_init 00:05:39.270 ************************************ 00:05:39.270 00:37:31 env -- env/env.sh@26 -- # uname 00:05:39.270 00:37:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:39.270 00:37:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.270 00:37:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.270 00:37:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.270 00:37:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.270 ************************************ 00:05:39.270 START TEST env_mem_callbacks 00:05:39.270 ************************************ 00:05:39.270 00:37:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.529 EAL: Detected CPU lcores: 10 00:05:39.529 EAL: Detected NUMA nodes: 1 00:05:39.529 EAL: Detected shared linkage of DPDK 00:05:39.529 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.529 EAL: Selected IOVA mode 'PA' 00:05:39.529 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.529 00:05:39.529 00:05:39.529 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.529 http://cunit.sourceforge.net/ 00:05:39.529 00:05:39.529 00:05:39.529 Suite: memory 00:05:39.529 Test: test ... 00:05:39.529 register 0x200000200000 2097152 00:05:39.529 malloc 3145728 00:05:39.529 register 0x200000400000 4194304 00:05:39.529 buf 0x200000500000 len 3145728 PASSED 00:05:39.529 malloc 64 00:05:39.529 buf 0x2000004fff40 len 64 PASSED 00:05:39.529 malloc 4194304 00:05:39.529 register 0x200000800000 6291456 00:05:39.529 buf 0x200000a00000 len 4194304 PASSED 00:05:39.529 free 0x200000500000 3145728 00:05:39.529 free 0x2000004fff40 64 00:05:39.529 unregister 0x200000400000 4194304 PASSED 00:05:39.529 free 0x200000a00000 4194304 00:05:39.529 unregister 0x200000800000 6291456 PASSED 00:05:39.529 malloc 8388608 00:05:39.529 register 0x200000400000 10485760 00:05:39.529 buf 0x200000600000 len 8388608 PASSED 00:05:39.529 free 0x200000600000 8388608 00:05:39.529 unregister 0x200000400000 10485760 PASSED 00:05:39.529 passed 00:05:39.529 00:05:39.529 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.529 suites 1 1 n/a 0 0 00:05:39.529 tests 1 1 1 0 0 00:05:39.529 asserts 15 15 15 0 n/a 00:05:39.529 00:05:39.529 Elapsed time = 0.010 seconds 00:05:39.529 00:05:39.529 real 0m0.175s 00:05:39.529 user 0m0.026s 00:05:39.529 sys 0m0.046s 00:05:39.529 00:37:31 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.529 ************************************ 00:05:39.529 END TEST env_mem_callbacks 00:05:39.529 ************************************ 00:05:39.529 00:37:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:39.529 00:05:39.529 real 0m3.318s 00:05:39.529 user 0m1.420s 00:05:39.529 sys 0m1.477s 00:05:39.529 00:37:31 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.529 00:37:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.529 ************************************ 00:05:39.529 END TEST env 00:05:39.529 
************************************ 00:05:39.529 00:37:31 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.529 00:37:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.529 00:37:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.529 00:37:31 -- common/autotest_common.sh@10 -- # set +x 00:05:39.529 ************************************ 00:05:39.529 START TEST rpc 00:05:39.529 ************************************ 00:05:39.529 00:37:31 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.788 * Looking for test storage... 00:05:39.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.788 00:37:31 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.788 00:37:31 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.788 00:37:31 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.788 00:37:31 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.788 00:37:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.788 00:37:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.788 00:37:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.788 00:37:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.788 00:37:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.788 00:37:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.788 00:37:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.788 00:37:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.788 00:37:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.788 00:37:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.788 00:37:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.788 00:37:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.788 00:37:31 rpc -- scripts/common.sh@345 -- # : 1 00:05:39.788 00:37:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.788 00:37:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.788 00:37:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.788 00:37:31 rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.788 00:37:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.788 00:37:31 rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.788 00:37:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.788 00:37:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.788 00:37:31 rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.788 00:37:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.788 00:37:31 rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.788 00:37:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.788 00:37:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.788 00:37:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.788 00:37:31 rpc -- scripts/common.sh@368 -- # return 0 00:05:39.788 00:37:31 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.788 00:37:31 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.788 --rc genhtml_branch_coverage=1 00:05:39.788 --rc genhtml_function_coverage=1 00:05:39.788 --rc genhtml_legend=1 00:05:39.788 --rc geninfo_all_blocks=1 00:05:39.788 --rc geninfo_unexecuted_blocks=1 00:05:39.788 00:05:39.788 ' 00:05:39.788 00:37:31 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.788 --rc genhtml_branch_coverage=1 00:05:39.788 --rc genhtml_function_coverage=1 00:05:39.788 --rc genhtml_legend=1 00:05:39.788 --rc geninfo_all_blocks=1 00:05:39.788 --rc geninfo_unexecuted_blocks=1 00:05:39.788 00:05:39.788 ' 00:05:39.788 00:37:31 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.788 --rc genhtml_branch_coverage=1 00:05:39.788 --rc genhtml_function_coverage=1 00:05:39.788 --rc genhtml_legend=1 00:05:39.788 --rc geninfo_all_blocks=1 00:05:39.788 --rc geninfo_unexecuted_blocks=1 00:05:39.788 00:05:39.788 ' 00:05:39.788 00:37:31 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.788 --rc genhtml_branch_coverage=1 00:05:39.788 --rc genhtml_function_coverage=1 00:05:39.788 --rc genhtml_legend=1 00:05:39.788 --rc geninfo_all_blocks=1 00:05:39.788 --rc geninfo_unexecuted_blocks=1 00:05:39.788 00:05:39.788 ' 00:05:39.788 00:37:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69843 00:05:39.789 00:37:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.789 00:37:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69843 00:05:39.789 00:37:31 rpc -- common/autotest_common.sh@831 -- # '[' -z 69843 ']' 00:05:39.789 00:37:31 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.789 00:37:31 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.789 00:37:31 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
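rpc.sh then launches spdk_tgt with the bdev tracepoint group enabled and blocks in waitforlisten until the RPC socket answers. A rough equivalent of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and rpc.py's rpc_get_methods as a cheap liveness probe:

    # Start the target, then poll the RPC socket until it responds (sketch).
    build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
    done
    echo "spdk_tgt ($spdk_pid) is listening"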
00:05:39.789 00:37:31 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.789 00:37:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.789 00:37:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:39.789 [2024-11-17 00:37:31.807221] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:39.789 [2024-11-17 00:37:31.807373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69843 ] 00:05:40.047 [2024-11-17 00:37:31.954254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.047 [2024-11-17 00:37:32.000246] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.047 [2024-11-17 00:37:32.000313] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69843' to capture a snapshot of events at runtime. 00:05:40.047 [2024-11-17 00:37:32.000330] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.047 [2024-11-17 00:37:32.000342] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.047 [2024-11-17 00:37:32.000370] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69843 for offline analysis/debug. 00:05:40.047 [2024-11-17 00:37:32.000412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.615 00:37:32 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.615 00:37:32 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:40.615 00:37:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.615 00:37:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.615 00:37:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:40.615 00:37:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:40.615 00:37:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.615 00:37:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.615 00:37:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.615 ************************************ 00:05:40.615 START TEST rpc_integrity 00:05:40.615 ************************************ 00:05:40.615 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:40.615 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.615 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.615 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.615 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.615 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.615 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
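The rpc_integrity test that follows is a create/verify/delete cycle over RPC: bdev_get_bdevs must report an empty list, then one entry after bdev_malloc_create, two after layering a passthru on top, and empty again after both are deleted. Condensed into the equivalent rpc.py calls, assuming a running target:

    # rpc_integrity in miniature (sketch; assumes spdk_tgt is up).
    scripts/rpc.py bdev_get_bdevs | jq length          # expect 0
    malloc=$(scripts/rpc.py bdev_malloc_create 8 512)  # e.g. Malloc0
    scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length          # expect 2
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete "$malloc"
    scripts/rpc.py bdev_get_bdevs | jq length          # expect 0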
00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.875 { 00:05:40.875 "name": "Malloc0", 00:05:40.875 "aliases": [ 00:05:40.875 "ebdad77b-df47-403f-a3cc-a5e8b79688d2" 00:05:40.875 ], 00:05:40.875 "product_name": "Malloc disk", 00:05:40.875 "block_size": 512, 00:05:40.875 "num_blocks": 16384, 00:05:40.875 "uuid": "ebdad77b-df47-403f-a3cc-a5e8b79688d2", 00:05:40.875 "assigned_rate_limits": { 00:05:40.875 "rw_ios_per_sec": 0, 00:05:40.875 "rw_mbytes_per_sec": 0, 00:05:40.875 "r_mbytes_per_sec": 0, 00:05:40.875 "w_mbytes_per_sec": 0 00:05:40.875 }, 00:05:40.875 "claimed": false, 00:05:40.875 "zoned": false, 00:05:40.875 "supported_io_types": { 00:05:40.875 "read": true, 00:05:40.875 "write": true, 00:05:40.875 "unmap": true, 00:05:40.875 "flush": true, 00:05:40.875 "reset": true, 00:05:40.875 "nvme_admin": false, 00:05:40.875 "nvme_io": false, 00:05:40.875 "nvme_io_md": false, 00:05:40.875 "write_zeroes": true, 00:05:40.875 "zcopy": true, 00:05:40.875 "get_zone_info": false, 00:05:40.875 "zone_management": false, 00:05:40.875 "zone_append": false, 00:05:40.875 "compare": false, 00:05:40.875 "compare_and_write": false, 00:05:40.875 "abort": true, 00:05:40.875 "seek_hole": false, 00:05:40.875 "seek_data": false, 00:05:40.875 "copy": true, 00:05:40.875 "nvme_iov_md": false 00:05:40.875 }, 00:05:40.875 "memory_domains": [ 00:05:40.875 { 00:05:40.875 "dma_device_id": "system", 00:05:40.875 "dma_device_type": 1 00:05:40.875 }, 00:05:40.875 { 00:05:40.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.875 "dma_device_type": 2 00:05:40.875 } 00:05:40.875 ], 00:05:40.875 "driver_specific": {} 00:05:40.875 } 00:05:40.875 ]' 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.875 [2024-11-17 00:37:32.759211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:40.875 [2024-11-17 00:37:32.759277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.875 [2024-11-17 00:37:32.759304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:40.875 [2024-11-17 00:37:32.759314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.875 [2024-11-17 00:37:32.761683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.875 [2024-11-17 00:37:32.761721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.875 
Passthru0 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.875 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.875 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.875 { 00:05:40.875 "name": "Malloc0", 00:05:40.875 "aliases": [ 00:05:40.875 "ebdad77b-df47-403f-a3cc-a5e8b79688d2" 00:05:40.875 ], 00:05:40.875 "product_name": "Malloc disk", 00:05:40.875 "block_size": 512, 00:05:40.875 "num_blocks": 16384, 00:05:40.875 "uuid": "ebdad77b-df47-403f-a3cc-a5e8b79688d2", 00:05:40.875 "assigned_rate_limits": { 00:05:40.875 "rw_ios_per_sec": 0, 00:05:40.875 "rw_mbytes_per_sec": 0, 00:05:40.875 "r_mbytes_per_sec": 0, 00:05:40.875 "w_mbytes_per_sec": 0 00:05:40.875 }, 00:05:40.875 "claimed": true, 00:05:40.875 "claim_type": "exclusive_write", 00:05:40.875 "zoned": false, 00:05:40.875 "supported_io_types": { 00:05:40.875 "read": true, 00:05:40.875 "write": true, 00:05:40.875 "unmap": true, 00:05:40.875 "flush": true, 00:05:40.875 "reset": true, 00:05:40.875 "nvme_admin": false, 00:05:40.875 "nvme_io": false, 00:05:40.875 "nvme_io_md": false, 00:05:40.875 "write_zeroes": true, 00:05:40.875 "zcopy": true, 00:05:40.875 "get_zone_info": false, 00:05:40.875 "zone_management": false, 00:05:40.875 "zone_append": false, 00:05:40.875 "compare": false, 00:05:40.875 "compare_and_write": false, 00:05:40.875 "abort": true, 00:05:40.875 "seek_hole": false, 00:05:40.875 "seek_data": false, 00:05:40.876 "copy": true, 00:05:40.876 "nvme_iov_md": false 00:05:40.876 }, 00:05:40.876 "memory_domains": [ 00:05:40.876 { 00:05:40.876 "dma_device_id": "system", 00:05:40.876 "dma_device_type": 1 00:05:40.876 }, 00:05:40.876 { 00:05:40.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.876 "dma_device_type": 2 00:05:40.876 } 00:05:40.876 ], 00:05:40.876 "driver_specific": {} 00:05:40.876 }, 00:05:40.876 { 00:05:40.876 "name": "Passthru0", 00:05:40.876 "aliases": [ 00:05:40.876 "49b77ef1-8a17-5366-a9e0-4cf0ac6669b3" 00:05:40.876 ], 00:05:40.876 "product_name": "passthru", 00:05:40.876 "block_size": 512, 00:05:40.876 "num_blocks": 16384, 00:05:40.876 "uuid": "49b77ef1-8a17-5366-a9e0-4cf0ac6669b3", 00:05:40.876 "assigned_rate_limits": { 00:05:40.876 "rw_ios_per_sec": 0, 00:05:40.876 "rw_mbytes_per_sec": 0, 00:05:40.876 "r_mbytes_per_sec": 0, 00:05:40.876 "w_mbytes_per_sec": 0 00:05:40.876 }, 00:05:40.876 "claimed": false, 00:05:40.876 "zoned": false, 00:05:40.876 "supported_io_types": { 00:05:40.876 "read": true, 00:05:40.876 "write": true, 00:05:40.876 "unmap": true, 00:05:40.876 "flush": true, 00:05:40.876 "reset": true, 00:05:40.876 "nvme_admin": false, 00:05:40.876 "nvme_io": false, 00:05:40.876 "nvme_io_md": false, 00:05:40.876 "write_zeroes": true, 00:05:40.876 "zcopy": true, 00:05:40.876 "get_zone_info": false, 00:05:40.876 "zone_management": false, 00:05:40.876 "zone_append": false, 00:05:40.876 "compare": false, 00:05:40.876 "compare_and_write": false, 00:05:40.876 "abort": true, 00:05:40.876 "seek_hole": false, 00:05:40.876 "seek_data": false, 00:05:40.876 "copy": true, 00:05:40.876 "nvme_iov_md": false 00:05:40.876 }, 00:05:40.876 "memory_domains": [ 00:05:40.876 { 00:05:40.876 "dma_device_id": "system", 00:05:40.876 "dma_device_type": 1 00:05:40.876 }, 
00:05:40.876 { 00:05:40.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.876 "dma_device_type": 2 00:05:40.876 } 00:05:40.876 ], 00:05:40.876 "driver_specific": { 00:05:40.876 "passthru": { 00:05:40.876 "name": "Passthru0", 00:05:40.876 "base_bdev_name": "Malloc0" 00:05:40.876 } 00:05:40.876 } 00:05:40.876 } 00:05:40.876 ]' 00:05:40.876 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.876 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.876 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.876 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.876 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.876 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.876 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.876 00:37:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.876 00:05:40.876 real 0m0.224s 00:05:40.876 user 0m0.120s 00:05:40.876 sys 0m0.039s 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.876 00:37:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.876 ************************************ 00:05:40.876 END TEST rpc_integrity 00:05:40.876 ************************************ 00:05:40.876 00:37:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:40.876 00:37:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.876 00:37:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.876 00:37:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.876 ************************************ 00:05:40.876 START TEST rpc_plugins 00:05:40.876 ************************************ 00:05:40.876 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:40.876 00:37:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:40.876 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.876 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.876 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.876 00:37:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:40.876 00:37:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:40.876 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.876 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.137 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.137 00:37:32 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.137 { 00:05:41.137 "name": "Malloc1", 00:05:41.137 "aliases": [ 00:05:41.137 "dbbc0e86-da64-4c7e-81ab-b741d56e027b" 00:05:41.137 ], 00:05:41.137 "product_name": "Malloc disk", 00:05:41.137 "block_size": 4096, 00:05:41.137 "num_blocks": 256, 00:05:41.137 "uuid": "dbbc0e86-da64-4c7e-81ab-b741d56e027b", 00:05:41.137 "assigned_rate_limits": { 00:05:41.137 "rw_ios_per_sec": 0, 00:05:41.137 "rw_mbytes_per_sec": 0, 00:05:41.137 "r_mbytes_per_sec": 0, 00:05:41.137 "w_mbytes_per_sec": 0 00:05:41.137 }, 00:05:41.137 "claimed": false, 00:05:41.137 "zoned": false, 00:05:41.137 "supported_io_types": { 00:05:41.137 "read": true, 00:05:41.137 "write": true, 00:05:41.137 "unmap": true, 00:05:41.137 "flush": true, 00:05:41.137 "reset": true, 00:05:41.137 "nvme_admin": false, 00:05:41.137 "nvme_io": false, 00:05:41.137 "nvme_io_md": false, 00:05:41.137 "write_zeroes": true, 00:05:41.137 "zcopy": true, 00:05:41.137 "get_zone_info": false, 00:05:41.137 "zone_management": false, 00:05:41.137 "zone_append": false, 00:05:41.137 "compare": false, 00:05:41.137 "compare_and_write": false, 00:05:41.137 "abort": true, 00:05:41.137 "seek_hole": false, 00:05:41.137 "seek_data": false, 00:05:41.137 "copy": true, 00:05:41.137 "nvme_iov_md": false 00:05:41.137 }, 00:05:41.137 "memory_domains": [ 00:05:41.137 { 00:05:41.137 "dma_device_id": "system", 00:05:41.137 "dma_device_type": 1 00:05:41.137 }, 00:05:41.137 { 00:05:41.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.137 "dma_device_type": 2 00:05:41.137 } 00:05:41.137 ], 00:05:41.137 "driver_specific": {} 00:05:41.137 } 00:05:41.137 ]' 00:05:41.137 00:37:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:41.137 00:37:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.137 00:37:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.137 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.137 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.137 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.137 00:37:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.137 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.137 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.137 00:37:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.137 00:37:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.137 00:37:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:41.137 00:37:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.137 00:05:41.137 real 0m0.117s 00:05:41.137 user 0m0.066s 00:05:41.137 sys 0m0.015s 00:05:41.137 00:37:33 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.137 ************************************ 00:05:41.137 END TEST rpc_plugins 00:05:41.138 ************************************ 00:05:41.138 00:37:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 00:37:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.138 00:37:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.138 00:37:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.138 00:37:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 ************************************ 00:05:41.138 START TEST rpc_trace_cmd_test 
00:05:41.138 ************************************ 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:41.138 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69843", 00:05:41.138 "tpoint_group_mask": "0x8", 00:05:41.138 "iscsi_conn": { 00:05:41.138 "mask": "0x2", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "scsi": { 00:05:41.138 "mask": "0x4", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "bdev": { 00:05:41.138 "mask": "0x8", 00:05:41.138 "tpoint_mask": "0xffffffffffffffff" 00:05:41.138 }, 00:05:41.138 "nvmf_rdma": { 00:05:41.138 "mask": "0x10", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "nvmf_tcp": { 00:05:41.138 "mask": "0x20", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "ftl": { 00:05:41.138 "mask": "0x40", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "blobfs": { 00:05:41.138 "mask": "0x80", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "dsa": { 00:05:41.138 "mask": "0x200", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "thread": { 00:05:41.138 "mask": "0x400", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "nvme_pcie": { 00:05:41.138 "mask": "0x800", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "iaa": { 00:05:41.138 "mask": "0x1000", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "nvme_tcp": { 00:05:41.138 "mask": "0x2000", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "bdev_nvme": { 00:05:41.138 "mask": "0x4000", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "sock": { 00:05:41.138 "mask": "0x8000", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "blob": { 00:05:41.138 "mask": "0x10000", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 }, 00:05:41.138 "bdev_raid": { 00:05:41.138 "mask": "0x20000", 00:05:41.138 "tpoint_mask": "0x0" 00:05:41.138 } 00:05:41.138 }' 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:41.138 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:41.399 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:41.399 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:41.399 00:37:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:41.399 00:05:41.399 real 0m0.176s 00:05:41.399 user 0m0.142s 00:05:41.399 sys 0m0.022s 00:05:41.399 00:37:33 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.399 00:37:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.399 ************************************ 00:05:41.399 END TEST rpc_trace_cmd_test 00:05:41.399 ************************************ 00:05:41.399 00:37:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:41.399 00:37:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:41.399 00:37:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:41.399 00:37:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.399 00:37:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.399 00:37:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.399 ************************************ 00:05:41.399 START TEST rpc_daemon_integrity 00:05:41.399 ************************************ 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.399 { 00:05:41.399 "name": "Malloc2", 00:05:41.399 "aliases": [ 00:05:41.399 "9a66926e-cbf4-413f-9ed0-911e30002ea5" 00:05:41.399 ], 00:05:41.399 "product_name": "Malloc disk", 00:05:41.399 "block_size": 512, 00:05:41.399 "num_blocks": 16384, 00:05:41.399 "uuid": "9a66926e-cbf4-413f-9ed0-911e30002ea5", 00:05:41.399 "assigned_rate_limits": { 00:05:41.399 "rw_ios_per_sec": 0, 00:05:41.399 "rw_mbytes_per_sec": 0, 00:05:41.399 "r_mbytes_per_sec": 0, 00:05:41.399 "w_mbytes_per_sec": 0 00:05:41.399 }, 00:05:41.399 "claimed": false, 00:05:41.399 "zoned": false, 00:05:41.399 "supported_io_types": { 00:05:41.399 "read": true, 00:05:41.399 "write": true, 00:05:41.399 "unmap": true, 00:05:41.399 "flush": true, 00:05:41.399 "reset": true, 00:05:41.399 "nvme_admin": false, 00:05:41.399 "nvme_io": false, 00:05:41.399 "nvme_io_md": false, 00:05:41.399 "write_zeroes": true, 00:05:41.399 "zcopy": true, 00:05:41.399 "get_zone_info": false, 00:05:41.399 "zone_management": false, 00:05:41.399 "zone_append": false, 
00:05:41.399 "compare": false, 00:05:41.399 "compare_and_write": false, 00:05:41.399 "abort": true, 00:05:41.399 "seek_hole": false, 00:05:41.399 "seek_data": false, 00:05:41.399 "copy": true, 00:05:41.399 "nvme_iov_md": false 00:05:41.399 }, 00:05:41.399 "memory_domains": [ 00:05:41.399 { 00:05:41.399 "dma_device_id": "system", 00:05:41.399 "dma_device_type": 1 00:05:41.399 }, 00:05:41.399 { 00:05:41.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.399 "dma_device_type": 2 00:05:41.399 } 00:05:41.399 ], 00:05:41.399 "driver_specific": {} 00:05:41.399 } 00:05:41.399 ]' 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.399 [2024-11-17 00:37:33.411525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:41.399 [2024-11-17 00:37:33.411585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.399 [2024-11-17 00:37:33.411606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:41.399 [2024-11-17 00:37:33.411616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.399 [2024-11-17 00:37:33.413886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.399 [2024-11-17 00:37:33.413922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.399 Passthru0 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.399 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.399 { 00:05:41.399 "name": "Malloc2", 00:05:41.399 "aliases": [ 00:05:41.399 "9a66926e-cbf4-413f-9ed0-911e30002ea5" 00:05:41.399 ], 00:05:41.399 "product_name": "Malloc disk", 00:05:41.399 "block_size": 512, 00:05:41.399 "num_blocks": 16384, 00:05:41.399 "uuid": "9a66926e-cbf4-413f-9ed0-911e30002ea5", 00:05:41.399 "assigned_rate_limits": { 00:05:41.399 "rw_ios_per_sec": 0, 00:05:41.399 "rw_mbytes_per_sec": 0, 00:05:41.399 "r_mbytes_per_sec": 0, 00:05:41.399 "w_mbytes_per_sec": 0 00:05:41.399 }, 00:05:41.399 "claimed": true, 00:05:41.399 "claim_type": "exclusive_write", 00:05:41.399 "zoned": false, 00:05:41.399 "supported_io_types": { 00:05:41.399 "read": true, 00:05:41.399 "write": true, 00:05:41.399 "unmap": true, 00:05:41.399 "flush": true, 00:05:41.399 "reset": true, 00:05:41.399 "nvme_admin": false, 00:05:41.399 "nvme_io": false, 00:05:41.399 "nvme_io_md": false, 00:05:41.399 "write_zeroes": true, 00:05:41.399 "zcopy": true, 00:05:41.399 "get_zone_info": false, 00:05:41.399 "zone_management": false, 00:05:41.399 "zone_append": false, 00:05:41.399 "compare": false, 00:05:41.399 "compare_and_write": false, 00:05:41.399 "abort": true, 00:05:41.399 
"seek_hole": false, 00:05:41.399 "seek_data": false, 00:05:41.399 "copy": true, 00:05:41.399 "nvme_iov_md": false 00:05:41.399 }, 00:05:41.399 "memory_domains": [ 00:05:41.399 { 00:05:41.399 "dma_device_id": "system", 00:05:41.399 "dma_device_type": 1 00:05:41.399 }, 00:05:41.399 { 00:05:41.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.399 "dma_device_type": 2 00:05:41.399 } 00:05:41.399 ], 00:05:41.399 "driver_specific": {} 00:05:41.399 }, 00:05:41.399 { 00:05:41.399 "name": "Passthru0", 00:05:41.399 "aliases": [ 00:05:41.399 "4bd631a9-5f54-54c0-b5e3-f4d53198c364" 00:05:41.399 ], 00:05:41.399 "product_name": "passthru", 00:05:41.399 "block_size": 512, 00:05:41.399 "num_blocks": 16384, 00:05:41.399 "uuid": "4bd631a9-5f54-54c0-b5e3-f4d53198c364", 00:05:41.399 "assigned_rate_limits": { 00:05:41.399 "rw_ios_per_sec": 0, 00:05:41.399 "rw_mbytes_per_sec": 0, 00:05:41.399 "r_mbytes_per_sec": 0, 00:05:41.399 "w_mbytes_per_sec": 0 00:05:41.399 }, 00:05:41.399 "claimed": false, 00:05:41.399 "zoned": false, 00:05:41.399 "supported_io_types": { 00:05:41.399 "read": true, 00:05:41.399 "write": true, 00:05:41.399 "unmap": true, 00:05:41.399 "flush": true, 00:05:41.399 "reset": true, 00:05:41.399 "nvme_admin": false, 00:05:41.399 "nvme_io": false, 00:05:41.399 "nvme_io_md": false, 00:05:41.399 "write_zeroes": true, 00:05:41.399 "zcopy": true, 00:05:41.399 "get_zone_info": false, 00:05:41.399 "zone_management": false, 00:05:41.399 "zone_append": false, 00:05:41.399 "compare": false, 00:05:41.399 "compare_and_write": false, 00:05:41.399 "abort": true, 00:05:41.399 "seek_hole": false, 00:05:41.399 "seek_data": false, 00:05:41.399 "copy": true, 00:05:41.399 "nvme_iov_md": false 00:05:41.399 }, 00:05:41.399 "memory_domains": [ 00:05:41.399 { 00:05:41.399 "dma_device_id": "system", 00:05:41.399 "dma_device_type": 1 00:05:41.399 }, 00:05:41.399 { 00:05:41.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.399 "dma_device_type": 2 00:05:41.399 } 00:05:41.399 ], 00:05:41.399 "driver_specific": { 00:05:41.399 "passthru": { 00:05:41.399 "name": "Passthru0", 00:05:41.400 "base_bdev_name": "Malloc2" 00:05:41.400 } 00:05:41.400 } 00:05:41.400 } 00:05:41.400 ]' 00:05:41.400 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.661 00:37:33 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.661 00:05:41.661 real 0m0.215s 00:05:41.661 user 0m0.120s 00:05:41.661 sys 0m0.031s 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.661 ************************************ 00:05:41.661 END TEST rpc_daemon_integrity 00:05:41.661 ************************************ 00:05:41.661 00:37:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.661 00:37:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:41.661 00:37:33 rpc -- rpc/rpc.sh@84 -- # killprocess 69843 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@950 -- # '[' -z 69843 ']' 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@954 -- # kill -0 69843 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@955 -- # uname 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69843 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.661 killing process with pid 69843 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69843' 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@969 -- # kill 69843 00:05:41.661 00:37:33 rpc -- common/autotest_common.sh@974 -- # wait 69843 00:05:42.233 00:05:42.233 real 0m2.399s 00:05:42.233 user 0m2.753s 00:05:42.233 sys 0m0.651s 00:05:42.233 ************************************ 00:05:42.233 00:37:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.233 00:37:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.233 END TEST rpc 00:05:42.233 ************************************ 00:05:42.233 00:37:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:42.233 00:37:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.233 00:37:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.233 00:37:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.233 ************************************ 00:05:42.233 START TEST skip_rpc 00:05:42.233 ************************************ 00:05:42.233 00:37:34 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:42.233 * Looking for test storage... 
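The rpc_daemon_integrity pass above repeats the rpc_integrity loop against the daemonized RPC path: create a malloc bdev, claim it with a passthru, confirm both are listed, then tear down and confirm the list is empty again. A minimal sketch of that loop, assuming a running spdk_tgt and the in-tree scripts/rpc.py client (the harness's rpc_cmd wraps it; the rpc() shorthand here is only for this sketch):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
  [ "$(rpc bdev_get_bdevs | jq length)" = 0 ]          # start with no bdevs
  malloc=$(rpc bdev_malloc_create 8 512)               # 8 MB, 512 B blocks -> 16384 blocks
  rpc bdev_passthru_create -b "$malloc" -p Passthru0   # passthru claims the malloc bdev
  [ "$(rpc bdev_get_bdevs | jq length)" = 2 ]          # malloc + passthru both visible
  rpc bdev_passthru_delete Passthru0
  rpc bdev_malloc_delete "$malloc"
  [ "$(rpc bdev_get_bdevs | jq length)" = 0 ]          # clean again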
00:05:42.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.233 00:37:34 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.233 00:37:34 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.233 00:37:34 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.233 00:37:34 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.233 00:37:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:42.233 00:37:34 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.233 00:37:34 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.234 --rc genhtml_branch_coverage=1 00:05:42.234 --rc genhtml_function_coverage=1 00:05:42.234 --rc genhtml_legend=1 00:05:42.234 --rc geninfo_all_blocks=1 00:05:42.234 --rc geninfo_unexecuted_blocks=1 00:05:42.234 00:05:42.234 ' 00:05:42.234 00:37:34 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.234 --rc genhtml_branch_coverage=1 00:05:42.234 --rc genhtml_function_coverage=1 00:05:42.234 --rc genhtml_legend=1 00:05:42.234 --rc geninfo_all_blocks=1 00:05:42.234 --rc geninfo_unexecuted_blocks=1 00:05:42.234 00:05:42.234 ' 00:05:42.234 00:37:34 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:42.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.234 --rc genhtml_branch_coverage=1 00:05:42.234 --rc genhtml_function_coverage=1 00:05:42.234 --rc genhtml_legend=1 00:05:42.234 --rc geninfo_all_blocks=1 00:05:42.234 --rc geninfo_unexecuted_blocks=1 00:05:42.234 00:05:42.234 ' 00:05:42.234 00:37:34 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.234 --rc genhtml_branch_coverage=1 00:05:42.234 --rc genhtml_function_coverage=1 00:05:42.234 --rc genhtml_legend=1 00:05:42.234 --rc geninfo_all_blocks=1 00:05:42.234 --rc geninfo_unexecuted_blocks=1 00:05:42.234 00:05:42.234 ' 00:05:42.234 00:37:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.234 00:37:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:42.234 00:37:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:42.234 00:37:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.234 00:37:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.234 00:37:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.234 ************************************ 00:05:42.234 START TEST skip_rpc 00:05:42.234 ************************************ 00:05:42.234 00:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:42.234 00:37:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70039 00:05:42.234 00:37:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.234 00:37:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:42.234 00:37:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:42.493 [2024-11-17 00:37:34.315657] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
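skip_rpc's assertion is simply that a target started with --no-rpc-server boots but never opens /var/tmp/spdk.sock, so any RPC against it must fail. A sketch using the harness helpers visible in the trace (NOT inverts an exit status, killprocess kills the pid and waits for it):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                        # no socket to waitforlisten on, so just wait
  NOT rpc_cmd spdk_get_version   # must fail: the RPC server was never started
  killprocess "$spdk_pid"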
00:05:42.493 [2024-11-17 00:37:34.316394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70039 ] 00:05:42.493 [2024-11-17 00:37:34.474093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.493 [2024-11-17 00:37:34.543653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70039 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70039 ']' 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70039 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70039 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.782 killing process with pid 70039 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70039' 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70039 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70039 00:05:47.782 00:05:47.782 real 0m5.351s 00:05:47.782 user 0m4.845s 00:05:47.782 sys 0m0.395s 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.782 ************************************ 00:05:47.782 END TEST skip_rpc 00:05:47.782 ************************************ 00:05:47.782 00:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:47.782 00:37:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:47.782 00:37:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.782 00:37:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.782 00:37:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.782 ************************************ 00:05:47.782 START TEST skip_rpc_with_json 00:05:47.782 ************************************ 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70127 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70127 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70127 ']' 00:05:47.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.782 00:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.782 [2024-11-17 00:37:39.710024] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
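skip_rpc_with_json (pid 70127) drives the config round trip: query a transport that does not exist yet, create it, then snapshot the entire runtime configuration. Both transport calls trace to the same script line (skip_rpc.sh@34), so the or-chaining below is inferred; the redirect target is the CONFIG_PATH defined at the top of the suite:

  rpc_cmd nvmf_get_transports --trtype tcp \
      || rpc_cmd nvmf_create_transport -t tcp   # -19 "No such device" first, so create it
  rpc_cmd save_config > "$CONFIG_PATH"          # dump every subsystem's config as JSON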
00:05:47.782 [2024-11-17 00:37:39.710164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70127 ] 00:05:48.043 [2024-11-17 00:37:39.854828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.043 [2024-11-17 00:37:39.927063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.614 [2024-11-17 00:37:40.545746] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:48.614 request: 00:05:48.614 { 00:05:48.614 "trtype": "tcp", 00:05:48.614 "method": "nvmf_get_transports", 00:05:48.614 "req_id": 1 00:05:48.614 } 00:05:48.614 Got JSON-RPC error response 00:05:48.614 response: 00:05:48.614 { 00:05:48.614 "code": -19, 00:05:48.614 "message": "No such device" 00:05:48.614 } 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.614 [2024-11-17 00:37:40.557840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.614 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.875 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.875 00:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.875 { 00:05:48.875 "subsystems": [ 00:05:48.875 { 00:05:48.875 "subsystem": "fsdev", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "fsdev_set_opts", 00:05:48.875 "params": { 00:05:48.875 "fsdev_io_pool_size": 65535, 00:05:48.875 "fsdev_io_cache_size": 256 00:05:48.875 } 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "keyring", 00:05:48.875 "config": [] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "iobuf", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "iobuf_set_options", 00:05:48.875 "params": { 00:05:48.875 "small_pool_count": 8192, 00:05:48.875 "large_pool_count": 1024, 00:05:48.875 "small_bufsize": 8192, 00:05:48.875 "large_bufsize": 135168 00:05:48.875 } 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "sock", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": 
"sock_set_default_impl", 00:05:48.875 "params": { 00:05:48.875 "impl_name": "posix" 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "sock_impl_set_options", 00:05:48.875 "params": { 00:05:48.875 "impl_name": "ssl", 00:05:48.875 "recv_buf_size": 4096, 00:05:48.875 "send_buf_size": 4096, 00:05:48.875 "enable_recv_pipe": true, 00:05:48.875 "enable_quickack": false, 00:05:48.875 "enable_placement_id": 0, 00:05:48.875 "enable_zerocopy_send_server": true, 00:05:48.875 "enable_zerocopy_send_client": false, 00:05:48.875 "zerocopy_threshold": 0, 00:05:48.875 "tls_version": 0, 00:05:48.875 "enable_ktls": false 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "sock_impl_set_options", 00:05:48.875 "params": { 00:05:48.875 "impl_name": "posix", 00:05:48.875 "recv_buf_size": 2097152, 00:05:48.875 "send_buf_size": 2097152, 00:05:48.875 "enable_recv_pipe": true, 00:05:48.875 "enable_quickack": false, 00:05:48.875 "enable_placement_id": 0, 00:05:48.875 "enable_zerocopy_send_server": true, 00:05:48.875 "enable_zerocopy_send_client": false, 00:05:48.875 "zerocopy_threshold": 0, 00:05:48.875 "tls_version": 0, 00:05:48.875 "enable_ktls": false 00:05:48.875 } 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "vmd", 00:05:48.875 "config": [] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "accel", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "accel_set_options", 00:05:48.875 "params": { 00:05:48.875 "small_cache_size": 128, 00:05:48.875 "large_cache_size": 16, 00:05:48.875 "task_count": 2048, 00:05:48.875 "sequence_count": 2048, 00:05:48.875 "buf_count": 2048 00:05:48.875 } 00:05:48.875 } 00:05:48.875 ] 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "subsystem": "bdev", 00:05:48.875 "config": [ 00:05:48.875 { 00:05:48.875 "method": "bdev_set_options", 00:05:48.875 "params": { 00:05:48.875 "bdev_io_pool_size": 65535, 00:05:48.875 "bdev_io_cache_size": 256, 00:05:48.875 "bdev_auto_examine": true, 00:05:48.875 "iobuf_small_cache_size": 128, 00:05:48.875 "iobuf_large_cache_size": 16 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "bdev_raid_set_options", 00:05:48.875 "params": { 00:05:48.875 "process_window_size_kb": 1024, 00:05:48.875 "process_max_bandwidth_mb_sec": 0 00:05:48.875 } 00:05:48.875 }, 00:05:48.875 { 00:05:48.875 "method": "bdev_iscsi_set_options", 00:05:48.875 "params": { 00:05:48.876 "timeout_sec": 30 00:05:48.876 } 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "method": "bdev_nvme_set_options", 00:05:48.876 "params": { 00:05:48.876 "action_on_timeout": "none", 00:05:48.876 "timeout_us": 0, 00:05:48.876 "timeout_admin_us": 0, 00:05:48.876 "keep_alive_timeout_ms": 10000, 00:05:48.876 "arbitration_burst": 0, 00:05:48.876 "low_priority_weight": 0, 00:05:48.876 "medium_priority_weight": 0, 00:05:48.876 "high_priority_weight": 0, 00:05:48.876 "nvme_adminq_poll_period_us": 10000, 00:05:48.876 "nvme_ioq_poll_period_us": 0, 00:05:48.876 "io_queue_requests": 0, 00:05:48.876 "delay_cmd_submit": true, 00:05:48.876 "transport_retry_count": 4, 00:05:48.876 "bdev_retry_count": 3, 00:05:48.876 "transport_ack_timeout": 0, 00:05:48.876 "ctrlr_loss_timeout_sec": 0, 00:05:48.876 "reconnect_delay_sec": 0, 00:05:48.876 "fast_io_fail_timeout_sec": 0, 00:05:48.876 "disable_auto_failback": false, 00:05:48.876 "generate_uuids": false, 00:05:48.876 "transport_tos": 0, 00:05:48.876 "nvme_error_stat": false, 00:05:48.876 "rdma_srq_size": 0, 00:05:48.876 "io_path_stat": false, 00:05:48.876 
"allow_accel_sequence": false, 00:05:48.876 "rdma_max_cq_size": 0, 00:05:48.876 "rdma_cm_event_timeout_ms": 0, 00:05:48.876 "dhchap_digests": [ 00:05:48.876 "sha256", 00:05:48.876 "sha384", 00:05:48.876 "sha512" 00:05:48.876 ], 00:05:48.876 "dhchap_dhgroups": [ 00:05:48.876 "null", 00:05:48.876 "ffdhe2048", 00:05:48.876 "ffdhe3072", 00:05:48.876 "ffdhe4096", 00:05:48.876 "ffdhe6144", 00:05:48.876 "ffdhe8192" 00:05:48.876 ] 00:05:48.876 } 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "method": "bdev_nvme_set_hotplug", 00:05:48.876 "params": { 00:05:48.876 "period_us": 100000, 00:05:48.876 "enable": false 00:05:48.876 } 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "method": "bdev_wait_for_examine" 00:05:48.876 } 00:05:48.876 ] 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "subsystem": "scsi", 00:05:48.876 "config": null 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "subsystem": "scheduler", 00:05:48.876 "config": [ 00:05:48.876 { 00:05:48.876 "method": "framework_set_scheduler", 00:05:48.876 "params": { 00:05:48.876 "name": "static" 00:05:48.876 } 00:05:48.876 } 00:05:48.876 ] 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "subsystem": "vhost_scsi", 00:05:48.876 "config": [] 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "subsystem": "vhost_blk", 00:05:48.876 "config": [] 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "subsystem": "ublk", 00:05:48.876 "config": [] 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "subsystem": "nbd", 00:05:48.876 "config": [] 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "subsystem": "nvmf", 00:05:48.876 "config": [ 00:05:48.876 { 00:05:48.876 "method": "nvmf_set_config", 00:05:48.876 "params": { 00:05:48.876 "discovery_filter": "match_any", 00:05:48.876 "admin_cmd_passthru": { 00:05:48.876 "identify_ctrlr": false 00:05:48.876 }, 00:05:48.876 "dhchap_digests": [ 00:05:48.876 "sha256", 00:05:48.876 "sha384", 00:05:48.876 "sha512" 00:05:48.876 ], 00:05:48.876 "dhchap_dhgroups": [ 00:05:48.876 "null", 00:05:48.876 "ffdhe2048", 00:05:48.876 "ffdhe3072", 00:05:48.876 "ffdhe4096", 00:05:48.876 "ffdhe6144", 00:05:48.876 "ffdhe8192" 00:05:48.876 ] 00:05:48.876 } 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "method": "nvmf_set_max_subsystems", 00:05:48.876 "params": { 00:05:48.876 "max_subsystems": 1024 00:05:48.876 } 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "method": "nvmf_set_crdt", 00:05:48.876 "params": { 00:05:48.876 "crdt1": 0, 00:05:48.876 "crdt2": 0, 00:05:48.876 "crdt3": 0 00:05:48.876 } 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "method": "nvmf_create_transport", 00:05:48.876 "params": { 00:05:48.876 "trtype": "TCP", 00:05:48.876 "max_queue_depth": 128, 00:05:48.876 "max_io_qpairs_per_ctrlr": 127, 00:05:48.876 "in_capsule_data_size": 4096, 00:05:48.876 "max_io_size": 131072, 00:05:48.876 "io_unit_size": 131072, 00:05:48.876 "max_aq_depth": 128, 00:05:48.876 "num_shared_buffers": 511, 00:05:48.876 "buf_cache_size": 4294967295, 00:05:48.876 "dif_insert_or_strip": false, 00:05:48.876 "zcopy": false, 00:05:48.876 "c2h_success": true, 00:05:48.876 "sock_priority": 0, 00:05:48.876 "abort_timeout_sec": 1, 00:05:48.876 "ack_timeout": 0, 00:05:48.876 "data_wr_pool_size": 0 00:05:48.876 } 00:05:48.876 } 00:05:48.876 ] 00:05:48.876 }, 00:05:48.876 { 00:05:48.876 "subsystem": "iscsi", 00:05:48.876 "config": [ 00:05:48.876 { 00:05:48.876 "method": "iscsi_set_options", 00:05:48.876 "params": { 00:05:48.876 "node_base": "iqn.2016-06.io.spdk", 00:05:48.876 "max_sessions": 128, 00:05:48.876 "max_connections_per_session": 2, 00:05:48.876 "max_queue_depth": 64, 00:05:48.876 "default_time2wait": 2, 
00:05:48.876 "default_time2retain": 20, 00:05:48.876 "first_burst_length": 8192, 00:05:48.876 "immediate_data": true, 00:05:48.876 "allow_duplicated_isid": false, 00:05:48.876 "error_recovery_level": 0, 00:05:48.876 "nop_timeout": 60, 00:05:48.876 "nop_in_interval": 30, 00:05:48.876 "disable_chap": false, 00:05:48.876 "require_chap": false, 00:05:48.876 "mutual_chap": false, 00:05:48.876 "chap_group": 0, 00:05:48.876 "max_large_datain_per_connection": 64, 00:05:48.876 "max_r2t_per_connection": 4, 00:05:48.876 "pdu_pool_size": 36864, 00:05:48.876 "immediate_data_pool_size": 16384, 00:05:48.876 "data_out_pool_size": 2048 00:05:48.876 } 00:05:48.876 } 00:05:48.876 ] 00:05:48.876 } 00:05:48.876 ] 00:05:48.876 } 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70127 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70127 ']' 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70127 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70127 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.876 killing process with pid 70127 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70127' 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70127 00:05:48.876 00:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70127 00:05:49.137 00:37:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70155 00:05:49.137 00:37:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.137 00:37:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70155 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70155 ']' 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70155 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70155 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.434 killing process with pid 70155 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70155' 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70155 
00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70155 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.434 00:05:54.434 real 0m6.767s 00:05:54.434 user 0m6.249s 00:05:54.434 sys 0m0.751s 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.434 ************************************ 00:05:54.434 END TEST skip_rpc_with_json 00:05:54.434 ************************************ 00:05:54.434 00:37:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.434 00:37:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:54.434 00:37:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.435 00:37:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.435 00:37:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.435 ************************************ 00:05:54.435 START TEST skip_rpc_with_delay 00:05:54.435 ************************************ 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:54.435 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.697 [2024-11-17 00:37:46.537153] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
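skip_rpc_with_delay needs no RPC at all: it asserts that spdk_tgt rejects the flag combination at startup, since --wait-for-rpc is meaningless once --no-rpc-server suppresses the server. The whole test is one inverted exec:

  NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # app.c: "Cannot use '--wait-for-rpc' if no RPC server is going to be started."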
00:05:54.697 [2024-11-17 00:37:46.537291] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:54.697 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:54.697 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.697 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.697 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.697 ************************************ 00:05:54.697 END TEST skip_rpc_with_delay 00:05:54.697 ************************************ 00:05:54.697 00:05:54.697 real 0m0.124s 00:05:54.697 user 0m0.072s 00:05:54.697 sys 0m0.051s 00:05:54.697 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.697 00:37:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:54.697 00:37:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:54.697 00:37:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:54.697 00:37:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:54.697 00:37:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.697 00:37:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.697 00:37:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.697 ************************************ 00:05:54.697 START TEST exit_on_failed_rpc_init 00:05:54.697 ************************************ 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70261 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70261 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70261 ']' 00:05:54.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.697 00:37:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.697 [2024-11-17 00:37:46.737088] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
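exit_on_failed_rpc_init holds the default RPC socket with one target, then launches a second on another core mask; the second must die during RPC init because /var/tmp/spdk.sock is already bound. In outline, with the harness helpers:

  build/bin/spdk_tgt -m 0x1 &     # first target binds /var/tmp/spdk.sock
  spdk_pid=$!
  waitforlisten "$spdk_pid"
  NOT build/bin/spdk_tgt -m 0x2   # second target: socket in use, rpc init fails
  killprocess "$spdk_pid"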
00:05:54.697 [2024-11-17 00:37:46.737267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70261 ] 00:05:54.959 [2024-11-17 00:37:46.891050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.960 [2024-11-17 00:37:46.954045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:55.532 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.793 [2024-11-17 00:37:47.657313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:55.793 [2024-11-17 00:37:47.657461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70279 ] 00:05:55.793 [2024-11-17 00:37:47.808904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.793 [2024-11-17 00:37:47.852243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.793 [2024-11-17 00:37:47.852341] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
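The trace that follows shows how NOT normalizes the second target's death: spdk_app_stop'd on non-zero yields exit status 234, statuses above 128 have 128 subtracted (leaving 106, as for a signal), and a case statement collapses that to a plain failure of 1, which NOT then inverts to success. In outline (the exact case arms live in autotest_common.sh):

  es=$?                                  # 234: spdk_app_stop'd on non-zero
  (( es > 128 )) && es=$(( es - 128 ))   # fold signal-range statuses: 234 -> 106
  es=1                                   # the case "$es" arms map 106 to plain failure
  (( !es == 0 ))                         # true when es != 0, so NOT exits 0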
00:05:55.793 [2024-11-17 00:37:47.852372] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:55.793 [2024-11-17 00:37:47.852388] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70261 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70261 ']' 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70261 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70261 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.054 killing process with pid 70261 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70261' 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70261 00:05:56.054 00:37:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70261 00:05:56.315 00:05:56.315 real 0m1.639s 00:05:56.315 user 0m1.744s 00:05:56.315 sys 0m0.491s 00:05:56.315 00:37:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.315 ************************************ 00:05:56.315 END TEST exit_on_failed_rpc_init 00:05:56.315 ************************************ 00:05:56.315 00:37:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.315 00:37:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.315 00:05:56.315 real 0m14.283s 00:05:56.315 user 0m13.058s 00:05:56.315 sys 0m1.873s 00:05:56.315 00:37:48 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.315 00:37:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.315 ************************************ 00:05:56.315 END TEST skip_rpc 00:05:56.315 ************************************ 00:05:56.577 00:37:48 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.577 00:37:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.577 00:37:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.577 00:37:48 -- common/autotest_common.sh@10 -- # set +x 00:05:56.577 
************************************ 00:05:56.577 START TEST rpc_client 00:05:56.577 ************************************ 00:05:56.577 00:37:48 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.577 * Looking for test storage... 00:05:56.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:56.577 00:37:48 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.577 00:37:48 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.577 00:37:48 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.577 00:37:48 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.577 00:37:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.578 00:37:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:56.578 00:37:48 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.578 00:37:48 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.578 --rc genhtml_branch_coverage=1 00:05:56.578 --rc genhtml_function_coverage=1 00:05:56.578 --rc genhtml_legend=1 00:05:56.578 --rc geninfo_all_blocks=1 00:05:56.578 --rc geninfo_unexecuted_blocks=1 00:05:56.578 00:05:56.578 ' 00:05:56.578 00:37:48 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.578 --rc genhtml_branch_coverage=1 00:05:56.578 --rc genhtml_function_coverage=1 00:05:56.578 --rc genhtml_legend=1 00:05:56.578 --rc geninfo_all_blocks=1 00:05:56.578 --rc geninfo_unexecuted_blocks=1 00:05:56.578 00:05:56.578 ' 00:05:56.578 00:37:48 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.578 --rc genhtml_branch_coverage=1 00:05:56.578 --rc genhtml_function_coverage=1 00:05:56.578 --rc genhtml_legend=1 00:05:56.578 --rc geninfo_all_blocks=1 00:05:56.578 --rc geninfo_unexecuted_blocks=1 00:05:56.578 00:05:56.578 ' 00:05:56.578 00:37:48 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.578 --rc genhtml_branch_coverage=1 00:05:56.578 --rc genhtml_function_coverage=1 00:05:56.578 --rc genhtml_legend=1 00:05:56.578 --rc geninfo_all_blocks=1 00:05:56.578 --rc geninfo_unexecuted_blocks=1 00:05:56.578 00:05:56.578 ' 00:05:56.578 00:37:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:56.578 OK 00:05:56.578 00:37:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:56.578 00:05:56.578 real 0m0.185s 00:05:56.578 user 0m0.102s 00:05:56.578 sys 0m0.089s 00:05:56.578 00:37:48 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.578 00:37:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:56.578 ************************************ 00:05:56.578 END TEST rpc_client 00:05:56.578 ************************************ 00:05:56.578 00:37:48 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:56.578 00:37:48 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.578 00:37:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.578 00:37:48 -- common/autotest_common.sh@10 -- # set +x 00:05:56.578 ************************************ 00:05:56.578 START TEST json_config 00:05:56.578 ************************************ 00:05:56.578 00:37:48 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:56.840 00:37:48 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.840 00:37:48 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.840 00:37:48 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.840 00:37:48 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.840 00:37:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.840 00:37:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.840 00:37:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.840 00:37:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.840 00:37:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.840 00:37:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.840 00:37:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.840 00:37:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.840 00:37:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.840 00:37:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.840 00:37:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.840 00:37:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:56.840 00:37:48 json_config -- scripts/common.sh@345 -- # : 1 00:05:56.840 00:37:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.840 00:37:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.840 00:37:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:56.840 00:37:48 json_config -- scripts/common.sh@353 -- # local d=1 00:05:56.840 00:37:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.840 00:37:48 json_config -- scripts/common.sh@355 -- # echo 1 00:05:56.840 00:37:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.840 00:37:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:56.840 00:37:48 json_config -- scripts/common.sh@353 -- # local d=2 00:05:56.840 00:37:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.840 00:37:48 json_config -- scripts/common.sh@355 -- # echo 2 00:05:56.840 00:37:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.840 00:37:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.840 00:37:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.840 00:37:48 json_config -- scripts/common.sh@368 -- # return 0 00:05:56.840 00:37:48 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.840 00:37:48 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.840 --rc genhtml_branch_coverage=1 00:05:56.840 --rc genhtml_function_coverage=1 00:05:56.840 --rc genhtml_legend=1 00:05:56.840 --rc geninfo_all_blocks=1 00:05:56.840 --rc geninfo_unexecuted_blocks=1 00:05:56.840 00:05:56.840 ' 00:05:56.840 00:37:48 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.840 --rc genhtml_branch_coverage=1 00:05:56.840 --rc genhtml_function_coverage=1 00:05:56.840 --rc genhtml_legend=1 00:05:56.840 --rc geninfo_all_blocks=1 00:05:56.840 --rc geninfo_unexecuted_blocks=1 00:05:56.840 00:05:56.840 ' 00:05:56.840 00:37:48 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.840 --rc genhtml_branch_coverage=1 00:05:56.840 --rc genhtml_function_coverage=1 00:05:56.840 --rc genhtml_legend=1 00:05:56.840 --rc geninfo_all_blocks=1 00:05:56.840 --rc geninfo_unexecuted_blocks=1 00:05:56.840 00:05:56.840 ' 00:05:56.840 00:37:48 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.840 --rc genhtml_branch_coverage=1 00:05:56.840 --rc genhtml_function_coverage=1 00:05:56.840 --rc genhtml_legend=1 00:05:56.840 --rc geninfo_all_blocks=1 00:05:56.840 --rc geninfo_unexecuted_blocks=1 00:05:56.840 00:05:56.840 ' 00:05:56.840 00:37:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.840 00:37:48 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5bb242f-1de1-40dd-90e9-47f53f9db552 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e5bb242f-1de1-40dd-90e9-47f53f9db552 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.840 00:37:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.840 00:37:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.840 00:37:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.840 00:37:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.840 00:37:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.840 00:37:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.840 00:37:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.840 00:37:48 json_config -- paths/export.sh@5 -- # export PATH 00:05:56.840 00:37:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@51 -- # : 0 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:56.840 00:37:48 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:56.840 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:56.840 00:37:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:56.840 00:37:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:56.841 00:37:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:56.841 00:37:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:56.841 00:37:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:56.841 00:37:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:56.841 00:37:48 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:56.841 WARNING: No tests are enabled so not running JSON configuration tests 00:05:56.841 00:37:48 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:56.841 00:05:56.841 real 0m0.162s 00:05:56.841 user 0m0.092s 00:05:56.841 sys 0m0.070s 00:05:56.841 00:37:48 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.841 00:37:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.841 ************************************ 00:05:56.841 END TEST json_config 00:05:56.841 ************************************ 00:05:56.841 00:37:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.841 00:37:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.841 00:37:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.841 00:37:48 -- common/autotest_common.sh@10 -- # set +x 00:05:56.841 ************************************ 00:05:56.841 START TEST json_config_extra_key 00:05:56.841 ************************************ 00:05:56.841 00:37:48 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.841 00:37:48 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.841 00:37:48 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.841 00:37:48 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.102 00:37:48 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.102 00:37:48 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.102 00:37:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:57.102 00:37:48 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.102 00:37:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.102 --rc genhtml_branch_coverage=1 00:05:57.102 --rc genhtml_function_coverage=1 00:05:57.102 --rc genhtml_legend=1 00:05:57.102 --rc geninfo_all_blocks=1 00:05:57.102 --rc geninfo_unexecuted_blocks=1 00:05:57.102 00:05:57.102 ' 00:05:57.102 00:37:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.102 --rc genhtml_branch_coverage=1 00:05:57.102 --rc genhtml_function_coverage=1 00:05:57.102 --rc genhtml_legend=1 00:05:57.102 --rc geninfo_all_blocks=1 00:05:57.102 --rc geninfo_unexecuted_blocks=1 00:05:57.102 00:05:57.102 ' 00:05:57.102 00:37:48 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.102 --rc genhtml_branch_coverage=1 00:05:57.102 --rc genhtml_function_coverage=1 00:05:57.102 --rc genhtml_legend=1 00:05:57.102 --rc geninfo_all_blocks=1 00:05:57.102 --rc geninfo_unexecuted_blocks=1 00:05:57.102 00:05:57.102 ' 00:05:57.102 00:37:48 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.102 --rc genhtml_branch_coverage=1 00:05:57.103 --rc 
genhtml_function_coverage=1 00:05:57.103 --rc genhtml_legend=1 00:05:57.103 --rc geninfo_all_blocks=1 00:05:57.103 --rc geninfo_unexecuted_blocks=1 00:05:57.103 00:05:57.103 ' 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5bb242f-1de1-40dd-90e9-47f53f9db552 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e5bb242f-1de1-40dd-90e9-47f53f9db552 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.103 00:37:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.103 00:37:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.103 00:37:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.103 00:37:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.103 00:37:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.103 00:37:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.103 00:37:48 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.103 00:37:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.103 00:37:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.103 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.103 00:37:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.103 INFO: launching applications... 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
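Both sourcings of nvmf/common.sh in this run trip the same non-fatal complaint: line 33 applies an arithmetic test to an empty string ('[' '' -eq 1 ']'), so test(1) prints "integer expression expected" on stderr and the branch is simply skipped. A sketch of the usual defensive rewrite; SOME_FLAG is a hypothetical stand-in, since the trace does not show which variable is empty:

    # The failing shape, as captured in the trace:
    #   [ '' -eq 1 ]      # -> "[: : integer expression expected"
    # Defaulting the empty/unset value to 0 keeps the test well-formed:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi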
00:05:57.103 00:37:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70462 00:05:57.103 Waiting for target to run... 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70462 /var/tmp/spdk_tgt.sock 00:05:57.103 00:37:48 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70462 ']' 00:05:57.103 00:37:48 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.103 00:37:48 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.103 00:37:48 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.103 00:37:48 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.103 00:37:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.103 00:37:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.103 [2024-11-17 00:37:49.077670] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:57.103 [2024-11-17 00:37:49.077808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70462 ] 00:05:57.671 [2024-11-17 00:37:49.544115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.671 [2024-11-17 00:37:49.578114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.930 00:37:49 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.930 00:05:57.930 00:37:49 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:57.930 00:37:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.930 INFO: shutting down applications... 00:05:57.930 00:37:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
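json_config_test_shutdown_app, entered here and finishing in the next stretch of the trace, is a plain SIGINT-then-poll loop: signal the target, then give it up to 30 half-second probes to exit. A minimal sketch of the flow visible in the xtrace (the error path taken after 30 failed probes is omitted):

    app=target
    kill -SIGINT "${app_pid[$app]}"          # ask the target to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "${app_pid[$app]}" 2>/dev/null; then
            app_pid[$app]=                   # process is gone; clear its slot
            break                            # -> "SPDK target shutdown done"
        fi
        sleep 0.5                            # up to ~15s of grace in total
    done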
00:05:57.930 00:37:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.931 00:37:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.931 00:37:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.931 00:37:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70462 ]] 00:05:57.931 00:37:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70462 00:05:57.931 00:37:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.931 00:37:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.931 00:37:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70462 00:05:57.931 00:37:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.504 00:37:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.504 00:37:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.504 00:37:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70462 00:05:58.504 00:37:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:58.504 00:37:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:58.504 SPDK target shutdown done 00:05:58.504 00:37:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:58.504 00:37:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:58.504 Success 00:05:58.504 00:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:58.504 00:05:58.504 real 0m1.564s 00:05:58.504 user 0m1.138s 00:05:58.504 sys 0m0.531s 00:05:58.504 00:37:50 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.504 00:37:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.504 ************************************ 00:05:58.504 END TEST json_config_extra_key 00:05:58.504 ************************************ 00:05:58.504 00:37:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.504 00:37:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.504 00:37:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.504 00:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:58.504 ************************************ 00:05:58.504 START TEST alias_rpc 00:05:58.504 ************************************ 00:05:58.504 00:37:50 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.504 * Looking for test storage... 
00:05:58.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:58.504 00:37:50 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.504 00:37:50 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.504 00:37:50 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.766 00:37:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.766 --rc genhtml_branch_coverage=1 00:05:58.766 --rc genhtml_function_coverage=1 00:05:58.766 --rc genhtml_legend=1 00:05:58.766 --rc geninfo_all_blocks=1 00:05:58.766 --rc geninfo_unexecuted_blocks=1 00:05:58.766 00:05:58.766 ' 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.766 --rc genhtml_branch_coverage=1 00:05:58.766 --rc genhtml_function_coverage=1 00:05:58.766 --rc genhtml_legend=1 00:05:58.766 --rc geninfo_all_blocks=1 00:05:58.766 --rc geninfo_unexecuted_blocks=1 00:05:58.766 00:05:58.766 ' 00:05:58.766 00:37:50 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.766 --rc genhtml_branch_coverage=1 00:05:58.766 --rc genhtml_function_coverage=1 00:05:58.766 --rc genhtml_legend=1 00:05:58.766 --rc geninfo_all_blocks=1 00:05:58.766 --rc geninfo_unexecuted_blocks=1 00:05:58.766 00:05:58.766 ' 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.766 --rc genhtml_branch_coverage=1 00:05:58.766 --rc genhtml_function_coverage=1 00:05:58.766 --rc genhtml_legend=1 00:05:58.766 --rc geninfo_all_blocks=1 00:05:58.766 --rc geninfo_unexecuted_blocks=1 00:05:58.766 00:05:58.766 ' 00:05:58.766 00:37:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.766 00:37:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70535 00:05:58.766 00:37:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70535 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70535 ']' 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.766 00:37:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.766 00:37:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.766 [2024-11-17 00:37:50.722417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:58.766 [2024-11-17 00:37:50.722580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70535 ] 00:05:59.028 [2024-11-17 00:37:50.875639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.028 [2024-11-17 00:37:50.948154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.600 00:37:51 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.600 00:37:51 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:59.600 00:37:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:59.862 00:37:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70535 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70535 ']' 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70535 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70535 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.862 killing process with pid 70535 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70535' 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@969 -- # kill 70535 00:05:59.862 00:37:51 alias_rpc -- common/autotest_common.sh@974 -- # wait 70535 00:06:00.435 00:06:00.435 real 0m1.931s 00:06:00.435 user 0m1.910s 00:06:00.435 sys 0m0.569s 00:06:00.435 00:37:52 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.435 ************************************ 00:06:00.435 END TEST alias_rpc 00:06:00.435 00:37:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.435 ************************************ 00:06:00.435 00:37:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:00.435 00:37:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:00.435 00:37:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.435 00:37:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.435 00:37:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.435 ************************************ 00:06:00.435 START TEST spdkcli_tcp 00:06:00.435 ************************************ 00:06:00.435 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:00.697 * Looking for test storage... 
00:06:00.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:00.697 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.697 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.697 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.697 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.697 00:37:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:00.697 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.697 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.697 --rc genhtml_branch_coverage=1 00:06:00.697 --rc genhtml_function_coverage=1 00:06:00.697 --rc genhtml_legend=1 00:06:00.697 --rc geninfo_all_blocks=1 00:06:00.697 --rc geninfo_unexecuted_blocks=1 00:06:00.697 00:06:00.697 ' 00:06:00.697 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.697 --rc genhtml_branch_coverage=1 00:06:00.697 --rc genhtml_function_coverage=1 00:06:00.697 --rc genhtml_legend=1 00:06:00.698 --rc geninfo_all_blocks=1 00:06:00.698 --rc geninfo_unexecuted_blocks=1 00:06:00.698 
00:06:00.698 ' 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.698 --rc genhtml_branch_coverage=1 00:06:00.698 --rc genhtml_function_coverage=1 00:06:00.698 --rc genhtml_legend=1 00:06:00.698 --rc geninfo_all_blocks=1 00:06:00.698 --rc geninfo_unexecuted_blocks=1 00:06:00.698 00:06:00.698 ' 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.698 --rc genhtml_branch_coverage=1 00:06:00.698 --rc genhtml_function_coverage=1 00:06:00.698 --rc genhtml_legend=1 00:06:00.698 --rc geninfo_all_blocks=1 00:06:00.698 --rc geninfo_unexecuted_blocks=1 00:06:00.698 00:06:00.698 ' 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70620 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70620 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70620 ']' 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.698 00:37:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.698 00:37:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.698 [2024-11-17 00:37:52.726583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:00.698 [2024-11-17 00:37:52.726735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70620 ] 00:06:00.960 [2024-11-17 00:37:52.879846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.960 [2024-11-17 00:37:52.955341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.960 [2024-11-17 00:37:52.955400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.532 00:37:53 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.532 00:37:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:01.532 00:37:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70637 00:06:01.532 00:37:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:01.532 00:37:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:01.794 [ 00:06:01.794 "bdev_malloc_delete", 00:06:01.794 "bdev_malloc_create", 00:06:01.794 "bdev_null_resize", 00:06:01.794 "bdev_null_delete", 00:06:01.794 "bdev_null_create", 00:06:01.794 "bdev_nvme_cuse_unregister", 00:06:01.794 "bdev_nvme_cuse_register", 00:06:01.794 "bdev_opal_new_user", 00:06:01.794 "bdev_opal_set_lock_state", 00:06:01.794 "bdev_opal_delete", 00:06:01.794 "bdev_opal_get_info", 00:06:01.794 "bdev_opal_create", 00:06:01.794 "bdev_nvme_opal_revert", 00:06:01.794 "bdev_nvme_opal_init", 00:06:01.794 "bdev_nvme_send_cmd", 00:06:01.794 "bdev_nvme_set_keys", 00:06:01.794 "bdev_nvme_get_path_iostat", 00:06:01.794 "bdev_nvme_get_mdns_discovery_info", 00:06:01.794 "bdev_nvme_stop_mdns_discovery", 00:06:01.794 "bdev_nvme_start_mdns_discovery", 00:06:01.794 "bdev_nvme_set_multipath_policy", 00:06:01.794 "bdev_nvme_set_preferred_path", 00:06:01.794 "bdev_nvme_get_io_paths", 00:06:01.794 "bdev_nvme_remove_error_injection", 00:06:01.794 "bdev_nvme_add_error_injection", 00:06:01.794 "bdev_nvme_get_discovery_info", 00:06:01.794 "bdev_nvme_stop_discovery", 00:06:01.794 "bdev_nvme_start_discovery", 00:06:01.794 "bdev_nvme_get_controller_health_info", 00:06:01.794 "bdev_nvme_disable_controller", 00:06:01.794 "bdev_nvme_enable_controller", 00:06:01.794 "bdev_nvme_reset_controller", 00:06:01.794 "bdev_nvme_get_transport_statistics", 00:06:01.794 "bdev_nvme_apply_firmware", 00:06:01.794 "bdev_nvme_detach_controller", 00:06:01.794 "bdev_nvme_get_controllers", 00:06:01.794 "bdev_nvme_attach_controller", 00:06:01.794 "bdev_nvme_set_hotplug", 00:06:01.794 "bdev_nvme_set_options", 00:06:01.794 "bdev_passthru_delete", 00:06:01.794 "bdev_passthru_create", 00:06:01.794 "bdev_lvol_set_parent_bdev", 00:06:01.794 "bdev_lvol_set_parent", 00:06:01.794 "bdev_lvol_check_shallow_copy", 00:06:01.794 "bdev_lvol_start_shallow_copy", 00:06:01.794 "bdev_lvol_grow_lvstore", 00:06:01.794 "bdev_lvol_get_lvols", 00:06:01.794 "bdev_lvol_get_lvstores", 00:06:01.794 "bdev_lvol_delete", 00:06:01.794 "bdev_lvol_set_read_only", 00:06:01.794 "bdev_lvol_resize", 00:06:01.794 "bdev_lvol_decouple_parent", 00:06:01.794 "bdev_lvol_inflate", 00:06:01.794 "bdev_lvol_rename", 00:06:01.794 "bdev_lvol_clone_bdev", 00:06:01.794 "bdev_lvol_clone", 00:06:01.794 "bdev_lvol_snapshot", 00:06:01.794 "bdev_lvol_create", 00:06:01.794 "bdev_lvol_delete_lvstore", 00:06:01.794 "bdev_lvol_rename_lvstore", 00:06:01.794 
"bdev_lvol_create_lvstore", 00:06:01.794 "bdev_raid_set_options", 00:06:01.794 "bdev_raid_remove_base_bdev", 00:06:01.794 "bdev_raid_add_base_bdev", 00:06:01.794 "bdev_raid_delete", 00:06:01.794 "bdev_raid_create", 00:06:01.794 "bdev_raid_get_bdevs", 00:06:01.794 "bdev_error_inject_error", 00:06:01.794 "bdev_error_delete", 00:06:01.794 "bdev_error_create", 00:06:01.794 "bdev_split_delete", 00:06:01.794 "bdev_split_create", 00:06:01.794 "bdev_delay_delete", 00:06:01.794 "bdev_delay_create", 00:06:01.794 "bdev_delay_update_latency", 00:06:01.794 "bdev_zone_block_delete", 00:06:01.794 "bdev_zone_block_create", 00:06:01.794 "blobfs_create", 00:06:01.794 "blobfs_detect", 00:06:01.794 "blobfs_set_cache_size", 00:06:01.794 "bdev_xnvme_delete", 00:06:01.794 "bdev_xnvme_create", 00:06:01.794 "bdev_aio_delete", 00:06:01.794 "bdev_aio_rescan", 00:06:01.794 "bdev_aio_create", 00:06:01.794 "bdev_ftl_set_property", 00:06:01.794 "bdev_ftl_get_properties", 00:06:01.794 "bdev_ftl_get_stats", 00:06:01.794 "bdev_ftl_unmap", 00:06:01.794 "bdev_ftl_unload", 00:06:01.794 "bdev_ftl_delete", 00:06:01.794 "bdev_ftl_load", 00:06:01.794 "bdev_ftl_create", 00:06:01.794 "bdev_virtio_attach_controller", 00:06:01.794 "bdev_virtio_scsi_get_devices", 00:06:01.794 "bdev_virtio_detach_controller", 00:06:01.794 "bdev_virtio_blk_set_hotplug", 00:06:01.794 "bdev_iscsi_delete", 00:06:01.794 "bdev_iscsi_create", 00:06:01.794 "bdev_iscsi_set_options", 00:06:01.794 "accel_error_inject_error", 00:06:01.794 "ioat_scan_accel_module", 00:06:01.794 "dsa_scan_accel_module", 00:06:01.794 "iaa_scan_accel_module", 00:06:01.794 "keyring_file_remove_key", 00:06:01.794 "keyring_file_add_key", 00:06:01.794 "keyring_linux_set_options", 00:06:01.794 "fsdev_aio_delete", 00:06:01.794 "fsdev_aio_create", 00:06:01.794 "iscsi_get_histogram", 00:06:01.794 "iscsi_enable_histogram", 00:06:01.794 "iscsi_set_options", 00:06:01.794 "iscsi_get_auth_groups", 00:06:01.794 "iscsi_auth_group_remove_secret", 00:06:01.794 "iscsi_auth_group_add_secret", 00:06:01.794 "iscsi_delete_auth_group", 00:06:01.794 "iscsi_create_auth_group", 00:06:01.794 "iscsi_set_discovery_auth", 00:06:01.794 "iscsi_get_options", 00:06:01.794 "iscsi_target_node_request_logout", 00:06:01.794 "iscsi_target_node_set_redirect", 00:06:01.794 "iscsi_target_node_set_auth", 00:06:01.795 "iscsi_target_node_add_lun", 00:06:01.795 "iscsi_get_stats", 00:06:01.795 "iscsi_get_connections", 00:06:01.795 "iscsi_portal_group_set_auth", 00:06:01.795 "iscsi_start_portal_group", 00:06:01.795 "iscsi_delete_portal_group", 00:06:01.795 "iscsi_create_portal_group", 00:06:01.795 "iscsi_get_portal_groups", 00:06:01.795 "iscsi_delete_target_node", 00:06:01.795 "iscsi_target_node_remove_pg_ig_maps", 00:06:01.795 "iscsi_target_node_add_pg_ig_maps", 00:06:01.795 "iscsi_create_target_node", 00:06:01.795 "iscsi_get_target_nodes", 00:06:01.795 "iscsi_delete_initiator_group", 00:06:01.795 "iscsi_initiator_group_remove_initiators", 00:06:01.795 "iscsi_initiator_group_add_initiators", 00:06:01.795 "iscsi_create_initiator_group", 00:06:01.795 "iscsi_get_initiator_groups", 00:06:01.795 "nvmf_set_crdt", 00:06:01.795 "nvmf_set_config", 00:06:01.795 "nvmf_set_max_subsystems", 00:06:01.795 "nvmf_stop_mdns_prr", 00:06:01.795 "nvmf_publish_mdns_prr", 00:06:01.795 "nvmf_subsystem_get_listeners", 00:06:01.795 "nvmf_subsystem_get_qpairs", 00:06:01.795 "nvmf_subsystem_get_controllers", 00:06:01.795 "nvmf_get_stats", 00:06:01.795 "nvmf_get_transports", 00:06:01.795 "nvmf_create_transport", 00:06:01.795 "nvmf_get_targets", 00:06:01.795 
"nvmf_delete_target", 00:06:01.795 "nvmf_create_target", 00:06:01.795 "nvmf_subsystem_allow_any_host", 00:06:01.795 "nvmf_subsystem_set_keys", 00:06:01.795 "nvmf_subsystem_remove_host", 00:06:01.795 "nvmf_subsystem_add_host", 00:06:01.795 "nvmf_ns_remove_host", 00:06:01.795 "nvmf_ns_add_host", 00:06:01.795 "nvmf_subsystem_remove_ns", 00:06:01.795 "nvmf_subsystem_set_ns_ana_group", 00:06:01.795 "nvmf_subsystem_add_ns", 00:06:01.795 "nvmf_subsystem_listener_set_ana_state", 00:06:01.795 "nvmf_discovery_get_referrals", 00:06:01.795 "nvmf_discovery_remove_referral", 00:06:01.795 "nvmf_discovery_add_referral", 00:06:01.795 "nvmf_subsystem_remove_listener", 00:06:01.795 "nvmf_subsystem_add_listener", 00:06:01.795 "nvmf_delete_subsystem", 00:06:01.795 "nvmf_create_subsystem", 00:06:01.795 "nvmf_get_subsystems", 00:06:01.795 "env_dpdk_get_mem_stats", 00:06:01.795 "nbd_get_disks", 00:06:01.795 "nbd_stop_disk", 00:06:01.795 "nbd_start_disk", 00:06:01.795 "ublk_recover_disk", 00:06:01.795 "ublk_get_disks", 00:06:01.795 "ublk_stop_disk", 00:06:01.795 "ublk_start_disk", 00:06:01.795 "ublk_destroy_target", 00:06:01.795 "ublk_create_target", 00:06:01.795 "virtio_blk_create_transport", 00:06:01.795 "virtio_blk_get_transports", 00:06:01.795 "vhost_controller_set_coalescing", 00:06:01.795 "vhost_get_controllers", 00:06:01.795 "vhost_delete_controller", 00:06:01.795 "vhost_create_blk_controller", 00:06:01.795 "vhost_scsi_controller_remove_target", 00:06:01.795 "vhost_scsi_controller_add_target", 00:06:01.795 "vhost_start_scsi_controller", 00:06:01.795 "vhost_create_scsi_controller", 00:06:01.795 "thread_set_cpumask", 00:06:01.795 "scheduler_set_options", 00:06:01.795 "framework_get_governor", 00:06:01.795 "framework_get_scheduler", 00:06:01.795 "framework_set_scheduler", 00:06:01.795 "framework_get_reactors", 00:06:01.795 "thread_get_io_channels", 00:06:01.795 "thread_get_pollers", 00:06:01.795 "thread_get_stats", 00:06:01.795 "framework_monitor_context_switch", 00:06:01.795 "spdk_kill_instance", 00:06:01.795 "log_enable_timestamps", 00:06:01.795 "log_get_flags", 00:06:01.795 "log_clear_flag", 00:06:01.795 "log_set_flag", 00:06:01.795 "log_get_level", 00:06:01.795 "log_set_level", 00:06:01.795 "log_get_print_level", 00:06:01.795 "log_set_print_level", 00:06:01.795 "framework_enable_cpumask_locks", 00:06:01.795 "framework_disable_cpumask_locks", 00:06:01.795 "framework_wait_init", 00:06:01.795 "framework_start_init", 00:06:01.795 "scsi_get_devices", 00:06:01.795 "bdev_get_histogram", 00:06:01.795 "bdev_enable_histogram", 00:06:01.795 "bdev_set_qos_limit", 00:06:01.795 "bdev_set_qd_sampling_period", 00:06:01.795 "bdev_get_bdevs", 00:06:01.795 "bdev_reset_iostat", 00:06:01.795 "bdev_get_iostat", 00:06:01.795 "bdev_examine", 00:06:01.795 "bdev_wait_for_examine", 00:06:01.795 "bdev_set_options", 00:06:01.795 "accel_get_stats", 00:06:01.795 "accel_set_options", 00:06:01.795 "accel_set_driver", 00:06:01.795 "accel_crypto_key_destroy", 00:06:01.795 "accel_crypto_keys_get", 00:06:01.795 "accel_crypto_key_create", 00:06:01.795 "accel_assign_opc", 00:06:01.795 "accel_get_module_info", 00:06:01.795 "accel_get_opc_assignments", 00:06:01.795 "vmd_rescan", 00:06:01.795 "vmd_remove_device", 00:06:01.795 "vmd_enable", 00:06:01.795 "sock_get_default_impl", 00:06:01.795 "sock_set_default_impl", 00:06:01.795 "sock_impl_set_options", 00:06:01.795 "sock_impl_get_options", 00:06:01.795 "iobuf_get_stats", 00:06:01.795 "iobuf_set_options", 00:06:01.795 "keyring_get_keys", 00:06:01.795 "framework_get_pci_devices", 00:06:01.795 
"framework_get_config", 00:06:01.795 "framework_get_subsystems", 00:06:01.795 "fsdev_set_opts", 00:06:01.795 "fsdev_get_opts", 00:06:01.795 "trace_get_info", 00:06:01.795 "trace_get_tpoint_group_mask", 00:06:01.795 "trace_disable_tpoint_group", 00:06:01.795 "trace_enable_tpoint_group", 00:06:01.795 "trace_clear_tpoint_mask", 00:06:01.795 "trace_set_tpoint_mask", 00:06:01.795 "notify_get_notifications", 00:06:01.795 "notify_get_types", 00:06:01.795 "spdk_get_version", 00:06:01.795 "rpc_get_methods" 00:06:01.795 ] 00:06:01.795 00:37:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:01.795 00:37:53 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:01.795 00:37:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.795 00:37:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:01.795 00:37:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70620 00:06:01.795 00:37:53 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70620 ']' 00:06:01.795 00:37:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70620 00:06:01.795 00:37:53 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:02.057 00:37:53 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.057 00:37:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70620 00:06:02.057 00:37:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.057 killing process with pid 70620 00:06:02.057 00:37:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.057 00:37:53 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70620' 00:06:02.057 00:37:53 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70620 00:06:02.057 00:37:53 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70620 00:06:02.630 00:06:02.630 real 0m1.960s 00:06:02.630 user 0m3.227s 00:06:02.630 sys 0m0.634s 00:06:02.630 00:37:54 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.630 ************************************ 00:06:02.630 END TEST spdkcli_tcp 00:06:02.630 ************************************ 00:06:02.630 00:37:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.630 00:37:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.630 00:37:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.630 00:37:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.630 00:37:54 -- common/autotest_common.sh@10 -- # set +x 00:06:02.630 ************************************ 00:06:02.630 START TEST dpdk_mem_utility 00:06:02.630 ************************************ 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.630 * Looking for test storage... 
00:06:02.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.630 00:37:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.630 --rc genhtml_branch_coverage=1 00:06:02.630 --rc genhtml_function_coverage=1 00:06:02.630 --rc genhtml_legend=1 00:06:02.630 --rc geninfo_all_blocks=1 00:06:02.630 --rc geninfo_unexecuted_blocks=1 00:06:02.630 00:06:02.630 ' 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.630 --rc 
genhtml_branch_coverage=1 00:06:02.630 --rc genhtml_function_coverage=1 00:06:02.630 --rc genhtml_legend=1 00:06:02.630 --rc geninfo_all_blocks=1 00:06:02.630 --rc geninfo_unexecuted_blocks=1 00:06:02.630 00:06:02.630 ' 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.630 --rc genhtml_branch_coverage=1 00:06:02.630 --rc genhtml_function_coverage=1 00:06:02.630 --rc genhtml_legend=1 00:06:02.630 --rc geninfo_all_blocks=1 00:06:02.630 --rc geninfo_unexecuted_blocks=1 00:06:02.630 00:06:02.630 ' 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.630 --rc genhtml_branch_coverage=1 00:06:02.630 --rc genhtml_function_coverage=1 00:06:02.630 --rc genhtml_legend=1 00:06:02.630 --rc geninfo_all_blocks=1 00:06:02.630 --rc geninfo_unexecuted_blocks=1 00:06:02.630 00:06:02.630 ' 00:06:02.630 00:37:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:02.630 00:37:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70720 00:06:02.630 00:37:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70720 00:06:02.630 00:37:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70720 ']' 00:06:02.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.630 00:37:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.892 [2024-11-17 00:37:54.742021] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:02.892 [2024-11-17 00:37:54.742182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70720 ] 00:06:02.892 [2024-11-17 00:37:54.892303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.153 [2024-11-17 00:37:54.966619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.728 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.728 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:03.728 00:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:03.728 00:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:03.728 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.728 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 { 00:06:03.728 "filename": "/tmp/spdk_mem_dump.txt" 00:06:03.728 } 00:06:03.728 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.728 00:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:03.728 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:03.728 1 heaps totaling size 860.000000 MiB 00:06:03.728 size: 860.000000 MiB heap id: 0 00:06:03.728 end heaps---------- 00:06:03.728 9 mempools totaling size 642.649841 MiB 00:06:03.728 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:03.728 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:03.728 size: 92.545471 MiB name: bdev_io_70720 00:06:03.728 size: 51.011292 MiB name: evtpool_70720 00:06:03.728 size: 50.003479 MiB name: msgpool_70720 00:06:03.728 size: 36.509338 MiB name: fsdev_io_70720 00:06:03.728 size: 21.763794 MiB name: PDU_Pool 00:06:03.728 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:03.728 size: 0.026123 MiB name: Session_Pool 00:06:03.728 end mempools------- 00:06:03.728 6 memzones totaling size 4.142822 MiB 00:06:03.728 size: 1.000366 MiB name: RG_ring_0_70720 00:06:03.728 size: 1.000366 MiB name: RG_ring_1_70720 00:06:03.728 size: 1.000366 MiB name: RG_ring_4_70720 00:06:03.728 size: 1.000366 MiB name: RG_ring_5_70720 00:06:03.728 size: 0.125366 MiB name: RG_ring_2_70720 00:06:03.728 size: 0.015991 MiB name: RG_ring_3_70720 00:06:03.728 end memzones------- 00:06:03.728 00:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:03.728 heap id: 0 total size: 860.000000 MiB number of busy elements: 322 number of free elements: 16 00:06:03.728 list of free elements. 
size: 13.933777 MiB 00:06:03.728 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:03.728 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:03.728 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:03.728 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:03.728 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:03.728 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:03.728 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:03.728 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:03.728 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:03.728 element at address: 0x20001d800000 with size: 0.566956 MiB 00:06:03.728 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:03.728 element at address: 0x200003e00000 with size: 0.487183 MiB 00:06:03.728 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:03.728 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:03.728 element at address: 0x20002ac00000 with size: 0.396118 MiB 00:06:03.728 element at address: 0x200003a00000 with size: 0.352112 MiB 00:06:03.728 list of standard malloc elements. size: 199.269531 MiB 00:06:03.728 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:03.728 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:03.728 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:03.728 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:03.728 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:03.728 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:03.728 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:03.728 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:03.728 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:03.728 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:06:03.728 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:03.728 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a5a240 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a5e700 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7cb80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:03.729 element at 
address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b3c0 
with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:03.729 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891240 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891300 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892680 with size: 0.000183 MiB 
00:06:03.729 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:03.729 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:03.730 element at 
address: 0x20001d894c00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac65680 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac65740 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6c340 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6dec0 
with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:03.730 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:03.730 list of memzone associated elements. 
size: 646.796692 MiB 00:06:03.730 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:03.730 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:03.730 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:03.730 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:03.730 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:03.730 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70720_0 00:06:03.730 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:03.730 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70720_0 00:06:03.730 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:03.730 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70720_0 00:06:03.730 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:03.730 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70720_0 00:06:03.730 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:03.730 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:03.730 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:03.730 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:03.730 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:03.730 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70720 00:06:03.730 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:03.731 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70720 00:06:03.731 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:03.731 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70720 00:06:03.731 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:03.731 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:03.731 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:03.731 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:03.731 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:03.731 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:03.731 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:03.731 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:03.731 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:03.731 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70720 00:06:03.731 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:03.731 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70720 00:06:03.731 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:03.731 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70720 00:06:03.731 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:03.731 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70720 00:06:03.731 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:03.731 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70720 00:06:03.731 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:03.731 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70720 00:06:03.731 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:03.731 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:03.731 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:03.731 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:03.731 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:03.731 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:03.731 element at address: 0x200003a5e7c0 with size: 0.125488 MiB 00:06:03.731 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70720 00:06:03.731 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:03.731 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:03.731 element at address: 0x20002ac65800 with size: 0.023743 MiB 00:06:03.731 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:03.731 element at address: 0x200003a5a500 with size: 0.016113 MiB 00:06:03.731 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70720 00:06:03.731 element at address: 0x20002ac6b940 with size: 0.002441 MiB 00:06:03.731 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:03.731 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:03.731 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70720 00:06:03.731 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:03.731 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70720 00:06:03.731 element at address: 0x200003a5a300 with size: 0.000305 MiB 00:06:03.731 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70720 00:06:03.731 element at address: 0x20002ac6c400 with size: 0.000305 MiB 00:06:03.731 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:03.731 00:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:03.731 00:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70720 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70720 ']' 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70720 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70720 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.731 killing process with pid 70720 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70720' 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70720 00:06:03.731 00:37:55 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70720 00:06:04.303 00:06:04.303 real 0m1.770s 00:06:04.303 user 0m1.625s 00:06:04.303 sys 0m0.585s 00:06:04.303 00:37:56 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.303 ************************************ 00:06:04.303 END TEST dpdk_mem_utility 00:06:04.303 00:37:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.303 ************************************ 00:06:04.303 00:37:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:04.303 00:37:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.303 00:37:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.303 00:37:56 -- common/autotest_common.sh@10 -- # set +x 
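The heap/mempool/memzone report in the dpdk_mem_utility section above is produced in two steps, both visible in the trace: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then parses that dump. A sketch of running the same sequence by hand against a live spdk_tgt, using the paths and flags shown by the test:

  # Step 1: have the target dump its DPDK memory state (writes /tmp/spdk_mem_dump.txt)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Step 2: summarize heaps, mempools and memzones from the dump
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # Optional: per-element detail for heap id 0, as the test runs with -m 0
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0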
00:06:04.303 ************************************ 00:06:04.303 START TEST event 00:06:04.303 ************************************ 00:06:04.303 00:37:56 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:04.565 * Looking for test storage... 00:06:04.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.565 00:37:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.565 00:37:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.565 00:37:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.565 00:37:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.565 00:37:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.565 00:37:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.565 00:37:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.565 00:37:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.565 00:37:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.565 00:37:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.565 00:37:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.565 00:37:56 event -- scripts/common.sh@344 -- # case "$op" in 00:06:04.565 00:37:56 event -- scripts/common.sh@345 -- # : 1 00:06:04.565 00:37:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.565 00:37:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.565 00:37:56 event -- scripts/common.sh@365 -- # decimal 1 00:06:04.565 00:37:56 event -- scripts/common.sh@353 -- # local d=1 00:06:04.565 00:37:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.565 00:37:56 event -- scripts/common.sh@355 -- # echo 1 00:06:04.565 00:37:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.565 00:37:56 event -- scripts/common.sh@366 -- # decimal 2 00:06:04.565 00:37:56 event -- scripts/common.sh@353 -- # local d=2 00:06:04.565 00:37:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.565 00:37:56 event -- scripts/common.sh@355 -- # echo 2 00:06:04.565 00:37:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.565 00:37:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.565 00:37:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.565 00:37:56 event -- scripts/common.sh@368 -- # return 0 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.565 --rc genhtml_branch_coverage=1 00:06:04.565 --rc genhtml_function_coverage=1 00:06:04.565 --rc genhtml_legend=1 00:06:04.565 --rc geninfo_all_blocks=1 00:06:04.565 --rc geninfo_unexecuted_blocks=1 00:06:04.565 00:06:04.565 ' 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.565 --rc genhtml_branch_coverage=1 00:06:04.565 --rc genhtml_function_coverage=1 00:06:04.565 --rc genhtml_legend=1 00:06:04.565 --rc 
geninfo_all_blocks=1 00:06:04.565 --rc geninfo_unexecuted_blocks=1 00:06:04.565 00:06:04.565 ' 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.565 --rc genhtml_branch_coverage=1 00:06:04.565 --rc genhtml_function_coverage=1 00:06:04.565 --rc genhtml_legend=1 00:06:04.565 --rc geninfo_all_blocks=1 00:06:04.565 --rc geninfo_unexecuted_blocks=1 00:06:04.565 00:06:04.565 ' 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.565 --rc genhtml_branch_coverage=1 00:06:04.565 --rc genhtml_function_coverage=1 00:06:04.565 --rc genhtml_legend=1 00:06:04.565 --rc geninfo_all_blocks=1 00:06:04.565 --rc geninfo_unexecuted_blocks=1 00:06:04.565 00:06:04.565 ' 00:06:04.565 00:37:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:04.565 00:37:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:04.565 00:37:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:04.565 00:37:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.565 00:37:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.565 ************************************ 00:06:04.565 START TEST event_perf 00:06:04.565 ************************************ 00:06:04.565 00:37:56 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:04.565 Running I/O for 1 seconds...[2024-11-17 00:37:56.537938] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:04.565 [2024-11-17 00:37:56.538070] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70806 ] 00:06:04.826 [2024-11-17 00:37:56.690758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.826 [2024-11-17 00:37:56.766705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.826 [2024-11-17 00:37:56.766999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.826 Running I/O for 1 seconds...[2024-11-17 00:37:56.767391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.826 [2024-11-17 00:37:56.767460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.760 00:06:05.760 lcore 0: 149029 00:06:05.760 lcore 1: 149030 00:06:05.760 lcore 2: 149032 00:06:05.760 lcore 3: 149036 00:06:06.018 done. 
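The four lcore counters above are per-core event counts from the 1-second run; the cpumask 0xF passed via -m selects cores 0-3, which matches the four reactors started. The benchmark can be rerun standalone with the same invocation the test wraps:

  # Event-processing benchmark: 4 cores (cpumask 0xF), 1 second run
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1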
00:06:06.018 00:06:06.018 real 0m1.336s 00:06:06.018 user 0m4.101s 00:06:06.018 sys 0m0.115s 00:06:06.018 00:37:57 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.018 00:37:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.018 ************************************ 00:06:06.018 END TEST event_perf 00:06:06.018 ************************************ 00:06:06.018 00:37:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:06.018 00:37:57 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:06.018 00:37:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.018 00:37:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.018 ************************************ 00:06:06.018 START TEST event_reactor 00:06:06.018 ************************************ 00:06:06.018 00:37:57 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:06.018 [2024-11-17 00:37:57.917533] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:06.018 [2024-11-17 00:37:57.917635] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70840 ] 00:06:06.018 [2024-11-17 00:37:58.060492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.276 [2024-11-17 00:37:58.101274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.217 test_start 00:06:07.217 oneshot 00:06:07.217 tick 100 00:06:07.217 tick 100 00:06:07.217 tick 250 00:06:07.217 tick 100 00:06:07.217 tick 100 00:06:07.217 tick 250 00:06:07.217 tick 500 00:06:07.217 tick 100 00:06:07.217 tick 100 00:06:07.217 tick 100 00:06:07.217 tick 250 00:06:07.217 tick 100 00:06:07.217 tick 100 00:06:07.217 test_end 00:06:07.217 00:06:07.217 real 0m1.331s 00:06:07.217 user 0m1.148s 00:06:07.217 sys 0m0.074s 00:06:07.217 00:37:59 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.217 ************************************ 00:06:07.217 END TEST event_reactor 00:06:07.217 ************************************ 00:06:07.217 00:37:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:07.478 00:37:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:07.478 00:37:59 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:07.478 00:37:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.478 00:37:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.478 ************************************ 00:06:07.478 START TEST event_reactor_perf 00:06:07.478 ************************************ 00:06:07.478 00:37:59 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:07.478 [2024-11-17 00:37:59.328928] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:07.478 [2024-11-17 00:37:59.329064] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70877 ] 00:06:07.478 [2024-11-17 00:37:59.477411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.739 [2024-11-17 00:37:59.548946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.774 test_start 00:06:08.774 test_end 00:06:08.774 Performance: 309202 events per second 00:06:08.774 00:06:08.774 real 0m1.320s 00:06:08.774 user 0m1.102s 00:06:08.774 sys 0m0.109s 00:06:08.774 00:38:00 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.774 ************************************ 00:06:08.774 END TEST event_reactor_perf 00:06:08.774 ************************************ 00:06:08.774 00:38:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 00:38:00 event -- event/event.sh@49 -- # uname -s 00:06:08.774 00:38:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:08.774 00:38:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:08.774 00:38:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.774 00:38:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.774 00:38:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 ************************************ 00:06:08.774 START TEST event_scheduler 00:06:08.774 ************************************ 00:06:08.774 00:38:00 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:08.774 * Looking for test storage... 
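For comparison with event_perf above, the reactor_perf figure (events per second on a single core) comes from the same style of invocation, shown here as the test runs it:

  # Single-core reactor event-throughput benchmark, 1 second run
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1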
00:06:08.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:08.774 00:38:00 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.774 00:38:00 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.774 00:38:00 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.774 00:38:00 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.774 00:38:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:09.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.035 00:38:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.035 00:38:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.035 00:38:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.035 00:38:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.035 --rc genhtml_branch_coverage=1 00:06:09.035 --rc genhtml_function_coverage=1 00:06:09.035 --rc genhtml_legend=1 00:06:09.035 --rc geninfo_all_blocks=1 00:06:09.035 --rc geninfo_unexecuted_blocks=1 00:06:09.035 00:06:09.035 ' 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.035 --rc genhtml_branch_coverage=1 00:06:09.035 --rc genhtml_function_coverage=1 00:06:09.035 --rc genhtml_legend=1 00:06:09.035 --rc geninfo_all_blocks=1 00:06:09.035 --rc geninfo_unexecuted_blocks=1 00:06:09.035 00:06:09.035 ' 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.035 --rc genhtml_branch_coverage=1 00:06:09.035 --rc genhtml_function_coverage=1 00:06:09.035 --rc genhtml_legend=1 00:06:09.035 --rc geninfo_all_blocks=1 00:06:09.035 --rc geninfo_unexecuted_blocks=1 00:06:09.035 00:06:09.035 ' 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.035 --rc genhtml_branch_coverage=1 00:06:09.035 --rc genhtml_function_coverage=1 00:06:09.035 --rc genhtml_legend=1 00:06:09.035 --rc geninfo_all_blocks=1 00:06:09.035 --rc geninfo_unexecuted_blocks=1 00:06:09.035 00:06:09.035 ' 00:06:09.035 00:38:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:09.035 00:38:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70947 00:06:09.035 00:38:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.035 00:38:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70947 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70947 ']' 00:06:09.035 00:38:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.035 00:38:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.035 [2024-11-17 00:38:00.902659] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
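The xtrace above is scripts/common.sh probing the installed lcov: it splits the dotted version string on '.', '-' and ':' and compares the components numerically to decide whether the newer branch-coverage flags are safe to pass. A standalone sketch of that comparison, assuming GNU bash and purely numeric components (the in-tree helper also guards non-numeric fields; 'version_lt' is a hypothetical name, not the harness's):

  # version_lt A B -- succeed when dotted version A sorts strictly before B.
  version_lt() {
      local IFS=.-:                       # same separators the traced helper uses
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0} # missing components compare as 0
          (( 10#$x < 10#$y )) && return 0
          (( 10#$x > 10#$y )) && return 1
      done
      return 1                            # equal is not "less than"
  }

  version_lt 1.15 2 && echo "old lcov"    # mirrors the traced 'lt 1.15 2'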
00:06:09.035 [2024-11-17 00:38:00.902791] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70947 ] 00:06:09.035 [2024-11-17 00:38:01.047791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.297 [2024-11-17 00:38:01.098439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.297 [2024-11-17 00:38:01.098674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.297 [2024-11-17 00:38:01.098920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.297 [2024-11-17 00:38:01.098979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.867 00:38:01 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.867 00:38:01 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:09.867 00:38:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:09.867 00:38:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.867 00:38:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.867 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:09.867 POWER: Cannot set governor of lcore 0 to userspace 00:06:09.867 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:09.867 POWER: Cannot set governor of lcore 0 to performance 00:06:09.867 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:09.867 POWER: Cannot set governor of lcore 0 to userspace 00:06:09.867 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:09.867 POWER: Cannot set governor of lcore 0 to userspace 00:06:09.867 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:09.867 POWER: Unable to set Power Management Environment for lcore 0 00:06:09.867 [2024-11-17 00:38:01.781021] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:09.867 [2024-11-17 00:38:01.781050] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:09.867 [2024-11-17 00:38:01.781061] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:09.867 [2024-11-17 00:38:01.781098] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:09.867 [2024-11-17 00:38:01.781108] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:09.868 [2024-11-17 00:38:01.781120] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:09.868 00:38:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.868 00:38:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:09.868 00:38:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.868 00:38:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.868 [2024-11-17 00:38:01.896385] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
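The POWER/GUEST_CHANNEL errors above are expected inside a VM: with no writable cpufreq governors, the dynamic scheduler skips the DPDK governor and falls back to its defaults (load limit 20, core limit 80, core busy 95). The setup the test drives over RPC reduces to starting the app paused, picking the scheduler, then resuming initialization. Roughly, as a hand-run sketch rather than scheduler.sh itself (readiness waiting is elided where the harness uses waitforlisten):

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  # ... wait until /var/tmp/spdk.sock accepts connections (waitforlisten) ...
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # while init is still paused
  $SPDK/scripts/rpc.py framework_start_init              # reactors begin scheduling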
00:06:09.868 00:38:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.868 00:38:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:09.868 00:38:01 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.868 00:38:01 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.868 00:38:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.868 ************************************ 00:06:09.868 START TEST scheduler_create_thread 00:06:09.868 ************************************ 00:06:09.868 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:09.868 00:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:09.868 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.868 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 2 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 3 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 4 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 5 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 6 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 7 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 8 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 9 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 10 00:06:10.128 00:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:10.128 00:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 00:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.128 00:38:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.128 00:38:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.128 00:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.128 00:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.072 00:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.072 00:38:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:11.072 00:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.072 00:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.452 00:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.452 00:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:12.452 00:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:12.452 00:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.452 00:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.392 ************************************ 00:06:13.392 END TEST scheduler_create_thread 00:06:13.392 ************************************ 00:06:13.392 00:38:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.392 00:06:13.392 real 0m3.374s 00:06:13.392 user 0m0.014s 00:06:13.392 sys 0m0.009s 00:06:13.392 00:38:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.392 00:38:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.392 00:38:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:13.392 00:38:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70947 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70947 ']' 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70947 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70947 00:06:13.392 killing process with pid 70947 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70947' 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70947 00:06:13.392 00:38:05 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70947 00:06:13.653 [2024-11-17 00:38:05.668274] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
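The scheduler_create_thread subtest above is a straight walk over the plugin RPCs: pinned busy and idle threads on each core, an unpinned thread whose activity is raised afterwards, and one thread created only to be deleted again. Condensed, with the thread ids this run returned (a sketch; the plugin module ships in test/event/scheduler and must be importable by rpc.py):

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy, pinned to core 0
  $RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle, pinned to core 0
  tid=$($RPC scheduler_thread_create -n half_active -a 0)       # unpinned; returned 11 here
  $RPC scheduler_thread_set_active "$tid" 50                    # raise it to 50% active
  doomed=$($RPC scheduler_thread_create -n deleted -a 100)      # returned 12 here
  $RPC scheduler_thread_delete "$doomed"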
00:06:13.914 00:06:13.914 real 0m5.219s 00:06:13.914 user 0m10.338s 00:06:13.914 sys 0m0.436s 00:06:13.914 00:38:05 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.914 00:38:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.914 ************************************ 00:06:13.914 END TEST event_scheduler 00:06:13.914 ************************************ 00:06:13.914 00:38:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:13.914 00:38:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:13.914 00:38:05 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.914 00:38:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.914 00:38:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.914 ************************************ 00:06:13.914 START TEST app_repeat 00:06:13.914 ************************************ 00:06:13.914 00:38:05 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71053 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:13.914 Process app_repeat pid: 71053 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71053' 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.914 spdk_app_start Round 0 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:13.914 00:38:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71053 /var/tmp/spdk-nbd.sock 00:06:13.914 00:38:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71053 ']' 00:06:13.914 00:38:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.914 00:38:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.914 00:38:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.914 00:38:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.914 00:38:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.175 [2024-11-17 00:38:06.005442] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
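Every app_repeat round rebuilds the same fixture over the app's private socket: two 64 MB malloc bdevs with 4 KiB blocks are created and exported through the kernel nbd driver as /dev/nbd0 and /dev/nbd1. Stripped to its RPC calls (the 'rpc' helper name is local to this sketch):

  sock=/var/tmp/spdk-nbd.sock
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" "$@"; }
  m0=$(rpc bdev_malloc_create 64 4096)    # prints the bdev name, "Malloc0"
  m1=$(rpc bdev_malloc_create 64 4096)    # "Malloc1"
  rpc nbd_start_disk "$m0" /dev/nbd0      # expose each bdev as a kernel block device
  rpc nbd_start_disk "$m1" /dev/nbd1
  grep -q -w nbd0 /proc/partitions        # the same readiness check waitfornbd polls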
00:06:14.175 [2024-11-17 00:38:06.005573] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71053 ] 00:06:14.175 [2024-11-17 00:38:06.156018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.175 [2024-11-17 00:38:06.216852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.175 [2024-11-17 00:38:06.216921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.118 00:38:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.118 00:38:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:15.118 00:38:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.118 Malloc0 00:06:15.118 00:38:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.378 Malloc1 00:06:15.378 00:38:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.378 00:38:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.639 /dev/nbd0 00:06:15.639 00:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.639 00:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:15.639 00:38:07 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.639 1+0 records in 00:06:15.639 1+0 records out 00:06:15.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253486 s, 16.2 MB/s 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.639 00:38:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.639 00:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.639 00:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.639 00:38:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.900 /dev/nbd1 00:06:15.900 00:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.900 00:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.900 1+0 records in 00:06:15.900 1+0 records out 00:06:15.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021394 s, 19.1 MB/s 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.900 00:38:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.900 00:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.900 00:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.900 00:38:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.900 00:38:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.900 
00:38:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.160 00:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.160 { 00:06:16.160 "nbd_device": "/dev/nbd0", 00:06:16.160 "bdev_name": "Malloc0" 00:06:16.160 }, 00:06:16.160 { 00:06:16.160 "nbd_device": "/dev/nbd1", 00:06:16.160 "bdev_name": "Malloc1" 00:06:16.160 } 00:06:16.160 ]' 00:06:16.160 00:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.160 { 00:06:16.161 "nbd_device": "/dev/nbd0", 00:06:16.161 "bdev_name": "Malloc0" 00:06:16.161 }, 00:06:16.161 { 00:06:16.161 "nbd_device": "/dev/nbd1", 00:06:16.161 "bdev_name": "Malloc1" 00:06:16.161 } 00:06:16.161 ]' 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.161 /dev/nbd1' 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.161 /dev/nbd1' 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.161 256+0 records in 00:06:16.161 256+0 records out 00:06:16.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107597 s, 97.5 MB/s 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.161 256+0 records in 00:06:16.161 256+0 records out 00:06:16.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158841 s, 66.0 MB/s 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.161 256+0 records in 00:06:16.161 256+0 records out 00:06:16.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155891 s, 67.3 MB/s 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.161 00:38:08 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.161 00:38:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.422 00:38:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.684 00:38:08 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.684 00:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.945 00:38:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.945 00:38:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.206 00:38:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.206 [2024-11-17 00:38:09.231883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.466 [2024-11-17 00:38:09.269178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.466 [2024-11-17 00:38:09.269184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.466 [2024-11-17 00:38:09.309284] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.466 [2024-11-17 00:38:09.309331] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.764 00:38:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.764 00:38:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:20.764 spdk_app_start Round 1 00:06:20.764 00:38:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71053 /var/tmp/spdk-nbd.sock 00:06:20.764 00:38:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71053 ']' 00:06:20.764 00:38:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.764 00:38:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.765 00:38:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
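The data pass traced in the round above is plain dd plus cmp: 1 MiB of random bytes is staged in a temp file, pushed through each nbd device with O_DIRECT, and read back for a byte-wise compare before the disks are stopped and the app is SIGTERMed ahead of the next round. A bare-bones equivalent of that verify cycle (temp-file path taken from the log):

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB payload
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write through the device
      cmp -b -n 1M "$tmp" "$nbd" || echo "mismatch on $nbd" >&2
  done
  rm "$tmp"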
00:06:20.765 00:38:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.765 00:38:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.765 00:38:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.765 00:38:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:20.765 00:38:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.765 Malloc0 00:06:20.765 00:38:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.765 Malloc1 00:06:20.765 00:38:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.765 00:38:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.024 /dev/nbd0 00:06:21.024 00:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.024 00:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.024 1+0 records in 00:06:21.024 1+0 records out 
00:06:21.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349152 s, 11.7 MB/s 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.024 00:38:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.024 00:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.024 00:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.024 00:38:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.284 /dev/nbd1 00:06:21.284 00:38:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.284 00:38:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.284 00:38:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.285 1+0 records in 00:06:21.285 1+0 records out 00:06:21.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030396 s, 13.5 MB/s 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.285 00:38:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.285 00:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.285 00:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.285 00:38:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.285 00:38:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.285 00:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.547 { 00:06:21.547 "nbd_device": "/dev/nbd0", 00:06:21.547 "bdev_name": "Malloc0" 00:06:21.547 }, 00:06:21.547 { 00:06:21.547 "nbd_device": "/dev/nbd1", 00:06:21.547 "bdev_name": "Malloc1" 00:06:21.547 } 
00:06:21.547 ]' 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.547 { 00:06:21.547 "nbd_device": "/dev/nbd0", 00:06:21.547 "bdev_name": "Malloc0" 00:06:21.547 }, 00:06:21.547 { 00:06:21.547 "nbd_device": "/dev/nbd1", 00:06:21.547 "bdev_name": "Malloc1" 00:06:21.547 } 00:06:21.547 ]' 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.547 /dev/nbd1' 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.547 /dev/nbd1' 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.547 256+0 records in 00:06:21.547 256+0 records out 00:06:21.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00777985 s, 135 MB/s 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.547 256+0 records in 00:06:21.547 256+0 records out 00:06:21.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175115 s, 59.9 MB/s 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.547 256+0 records in 00:06:21.547 256+0 records out 00:06:21.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192759 s, 54.4 MB/s 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.547 00:38:13 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.547 00:38:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.808 00:38:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.069 00:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.330 00:38:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.331 00:38:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.331 00:38:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.592 00:38:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.592 [2024-11-17 00:38:14.553374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.592 [2024-11-17 00:38:14.590589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.592 [2024-11-17 00:38:14.590665] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.592 [2024-11-17 00:38:14.631782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.592 [2024-11-17 00:38:14.631839] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.930 spdk_app_start Round 2 00:06:25.930 00:38:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.930 00:38:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:25.930 00:38:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71053 /var/tmp/spdk-nbd.sock 00:06:25.930 00:38:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71053 ']' 00:06:25.930 00:38:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.930 00:38:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.930 00:38:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
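The count that brackets each round's teardown comes straight from the RPC output: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} objects, jq flattens it to device paths, and grep -c tallies them, giving 2 while the disks are attached and 0 once nbd_stop_disk has run. For instance:

  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
          | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)  # grep exits 1 on zero matches
  echo "attached nbd devices: $count"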
00:06:25.930 00:38:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.930 00:38:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.930 00:38:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.930 00:38:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:25.930 00:38:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.930 Malloc0 00:06:25.930 00:38:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.191 Malloc1 00:06:26.191 00:38:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.191 00:38:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.453 /dev/nbd0 00:06:26.453 00:38:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.453 00:38:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.453 1+0 records in 00:06:26.453 1+0 records out 
00:06:26.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191487 s, 21.4 MB/s
00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:26.453 00:38:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:26.453 00:38:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:26.453 00:38:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:26.453 00:38:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:26.715 /dev/nbd1
00:06:26.715 00:38:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:26.715 00:38:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:26.715 1+0 records in
00:06:26.715 1+0 records out
00:06:26.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253792 s, 16.1 MB/s
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:26.715 00:38:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:26.715 00:38:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:26.715 00:38:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:26.715 00:38:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:26.715 00:38:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.715 00:38:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:26.976 {
00:06:26.976 "nbd_device": "/dev/nbd0",
00:06:26.976 "bdev_name": "Malloc0"
00:06:26.976 },
00:06:26.976 {
00:06:26.976 "nbd_device": "/dev/nbd1",
00:06:26.976 "bdev_name": "Malloc1"
00:06:26.976 }
00:06:26.976 ]'
00:38:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:26.976 {
00:06:26.976 "nbd_device": "/dev/nbd0",
00:06:26.976 "bdev_name": "Malloc0"
00:06:26.976 },
00:06:26.976 {
00:06:26.976 "nbd_device": "/dev/nbd1",
00:06:26.976 "bdev_name": "Malloc1"
00:06:26.976 }
00:06:26.976 ]'
00:38:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:26.976 /dev/nbd1'
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:26.976 /dev/nbd1'
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:26.976 256+0 records in
00:06:26.976 256+0 records out
00:06:26.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00719667 s, 146 MB/s
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:26.976 256+0 records in
00:06:26.976 256+0 records out
00:06:26.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160008 s, 65.5 MB/s
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:26.976 256+0 records in
00:06:26.976 256+0 records out
00:06:26.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195008 s, 53.8 MB/s
00:06:26.976 00:38:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:26.977 00:38:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:27.238 00:38:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:27.500 00:38:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:27.760 00:38:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:27.760 00:38:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:28.021 00:38:19 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:28.021 [2024-11-17 00:38:19.971106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:28.021 [2024-11-17 00:38:20.009795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:28.021 [2024-11-17 00:38:20.009939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.021 [2024-11-17 00:38:20.051336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:28.021 [2024-11-17 00:38:20.051393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:31.322 00:38:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71053 /var/tmp/spdk-nbd.sock
00:06:31.322 00:38:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71053 ']'
00:06:31.322 00:38:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:31.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:31.322 00:38:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:31.322 00:38:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
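Annotation: the sequence above is the whole NBD round-trip that app_repeat exercises: create malloc bdevs, export them, write and verify random data, then confirm nbd_get_disks reports an empty list. A condensed sketch of the export-and-count part, using only RPCs and tools visible in the trace (the paths are this run's; the `|| true` explains the bare `true` in the trace):

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create 64 4096          # prints the new bdev's name, e.g. Malloc0
$RPC nbd_start_disk Malloc0 /dev/nbd0    # prints /dev/nbd0 once the export is live

# Count exported devices: jq extracts each .nbd_device field from the JSON
# array and grep -c counts matches; '|| true' keeps grep's exit status 1 on
# zero matches from aborting a 'set -e' script.
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
echo "$count"   # 1 while exported; back to 0 after nbd_stop_disk
```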
00:06:31.322 00:38:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:31.322 00:38:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:31.322 00:38:23 event.app_repeat -- event/event.sh@39 -- # killprocess 71053
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71053 ']'
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71053
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71053
00:06:31.322 killing process with pid 71053
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71053'
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71053
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71053
00:06:31.322 spdk_app_start is called in Round 0.
00:06:31.322 Shutdown signal received, stop current app iteration
00:06:31.322 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization...
00:06:31.322 spdk_app_start is called in Round 1.
00:06:31.322 Shutdown signal received, stop current app iteration
00:06:31.322 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization...
00:06:31.322 spdk_app_start is called in Round 2.
00:06:31.322 Shutdown signal received, stop current app iteration
00:06:31.322 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization...
00:06:31.322 spdk_app_start is called in Round 3.
00:06:31.322 Shutdown signal received, stop current app iteration
00:06:31.322 ************************************
00:06:31.322 END TEST app_repeat
00:06:31.322 ************************************
00:06:31.322 00:38:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:31.322 00:38:23 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:31.322
00:06:31.322 real 0m17.271s
00:06:31.322 user 0m38.248s
00:06:31.322 sys 0m2.339s
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:31.322 00:38:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:31.322 00:38:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:31.322 00:38:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:31.322 00:38:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:31.322 00:38:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:31.322 00:38:23 event -- common/autotest_common.sh@10 -- # set +x
00:06:31.322 ************************************
00:06:31.322 START TEST cpu_locks
00:06:31.322 ************************************
00:06:31.322 00:38:23 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:31.322 * Looking for test storage...
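Annotation: the killprocess trace above guards the kill with a liveness check and a process-name check before sending SIGTERM. A sketch of that pattern, hedged (the real helper in autotest_common.sh has more branches than this, e.g. for sudo-wrapped processes):

```bash
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                          # is the pid still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 in this run
        [ "$process_name" = sudo ] && return 1          # assumption: refuse to SIGTERM a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # works here because the target is a child of the test shell
}
```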
00:06:31.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:31.322 00:38:23 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:31.322 00:38:23 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version
00:06:31.322 00:38:23 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:31.583 00:38:23 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:31.583 00:38:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:31.583 00:38:23 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:31.583 00:38:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:31.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.583 --rc genhtml_branch_coverage=1
00:06:31.583 --rc genhtml_function_coverage=1
00:06:31.583 --rc genhtml_legend=1
00:06:31.583 --rc geninfo_all_blocks=1
00:06:31.583 --rc geninfo_unexecuted_blocks=1
00:06:31.583
00:06:31.583 '
00:06:31.583 00:38:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:31.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.583 --rc genhtml_branch_coverage=1
00:06:31.583 --rc genhtml_function_coverage=1
00:06:31.583 --rc genhtml_legend=1
00:06:31.583 --rc geninfo_all_blocks=1
00:06:31.583 --rc geninfo_unexecuted_blocks=1
00:06:31.583
00:06:31.583 '
00:06:31.583 00:38:23 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:31.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.583 --rc genhtml_branch_coverage=1
00:06:31.583 --rc genhtml_function_coverage=1
00:06:31.583 --rc genhtml_legend=1
00:06:31.583 --rc geninfo_all_blocks=1
00:06:31.583 --rc geninfo_unexecuted_blocks=1
00:06:31.583
00:06:31.583 '
00:06:31.583 00:38:23 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:31.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.583 --rc genhtml_branch_coverage=1
00:06:31.583 --rc genhtml_function_coverage=1
00:06:31.583 --rc genhtml_legend=1
00:06:31.583 --rc geninfo_all_blocks=1
00:06:31.583 --rc geninfo_unexecuted_blocks=1
00:06:31.583
00:06:31.583 '
00:06:31.583 00:38:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:31.583 00:38:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:31.583 00:38:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:31.583 00:38:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:31.583 00:38:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:31.583 00:38:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:31.583 00:38:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:31.583 ************************************
00:06:31.583 START TEST default_locks
00:06:31.583 ************************************
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71478
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71478
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71478 ']'
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:31.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:31.583 00:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:31.583 [2024-11-17 00:38:23.537054] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
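Annotation: the lt/cmp_versions trace above (scripts/common.sh) decides whether the installed lcov predates 2.x before choosing coverage flags. A compact sketch of the comparison, assuming purely numeric fields (the real helper also validates each field through its decimal() wrapper, which the trace shows):

```bash
cmp_versions() {             # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]         # every field equal
}
lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, as in the trace
```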
00:06:31.583 [2024-11-17 00:38:23.537433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71478 ]
00:06:31.842 [2024-11-17 00:38:23.681074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.842 [2024-11-17 00:38:23.723503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:32.412 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:32.412 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:06:32.412 00:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71478
00:06:32.412 00:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71478
00:06:32.412 00:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71478
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71478 ']'
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71478
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71478
00:06:32.673 killing process with pid 71478
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71478'
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71478
00:06:32.673 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71478
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71478
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71478
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:32.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
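Annotation: locks_exist above is the core assertion of default_locks: a target started with -m 0x1 must hold a POSIX file lock, attributable to its pid, on a per-core lock file whose name contains spdk_cpu_lock (the exact path is SPDK-internal; the grep pattern is the one in the trace). Sketched:

```bash
locks_exist() {
    # succeeds iff pid $1 holds a lock on a file matching 'spdk_cpu_lock'
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
locks_exist 71478 && echo "core lock held"   # 71478 is this run's target pid
```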
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71478
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71478 ']'
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:32.933 ERROR: process (pid: 71478) is no longer running
00:06:32.933 ************************************
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:32.933 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71478) - No such process
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:32.933
00:06:32.933 real 0m1.493s
00:06:32.933 user 0m1.458s
00:06:32.933 sys 0m0.470s
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:32.933 00:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:32.933 END TEST default_locks
00:06:32.933 ************************************
00:06:33.194 00:38:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:33.194 00:38:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:33.194 00:38:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:33.194 00:38:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:33.194 ************************************
00:06:33.194 START TEST default_locks_via_rpc
00:06:33.194 ************************************
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71526
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71526
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71526 ']'
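Annotation: the es=0/es=1 bookkeeping above comes from a NOT-style wrapper: run a command that is expected to fail and invert its status, so the test only passes when the command fails. A simplified sketch (per the trace, the real helper also special-cases exit codes above 128 from signal deaths, which this version omits):

```bash
NOT() {
    local es=0
    "$@" || es=$?
    ((es != 0))   # success only when the wrapped command failed
}
# NOT waitforlisten 71478   # passes here, because pid 71478 is already gone
```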
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:33.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:33.194 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:33.194 [2024-11-17 00:38:25.102165] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:33.194 [2024-11-17 00:38:25.102858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71526 ]
00:06:33.194 [2024-11-17 00:38:25.241152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.455 [2024-11-17 00:38:25.282664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71526
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71526
00:06:34.028 00:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71526
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71526 ']'
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71526
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71526
00:06:34.289 killing process with pid 71526
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71526'
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71526
00:06:34.289 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71526
00:06:34.551
00:06:34.551 real 0m1.413s
00:06:34.551 user 0m1.369s
00:06:34.551 sys 0m0.448s
00:06:34.551 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:34.551 ************************************
00:06:34.551 END TEST default_locks_via_rpc
00:06:34.551 ************************************
00:06:34.551 00:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:34.551 00:38:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:34.551 00:38:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:34.551 00:38:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:34.551 00:38:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:34.551 ************************************
00:06:34.551 START TEST non_locking_app_on_locked_coremask
00:06:34.551 ************************************
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71572
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71572 /var/tmp/spdk.sock
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71572 ']'
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:34.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
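Annotation: default_locks_via_rpc, traced above, toggles the same core locks at runtime instead of at startup. A sketch of the round-trip, with rpc_cmd standing in for scripts/rpc.py against /var/tmp/spdk.sock and $pid an assumed target pid; both RPC names come straight from the trace:

```bash
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rpc_cmd framework_disable_cpumask_locks   # release the per-core lock files
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"
rpc_cmd framework_enable_cpumask_locks    # take them again
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock held again, as expected"
```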
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:34.551 00:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:34.551 [2024-11-17 00:38:26.573298] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:34.551 [2024-11-17 00:38:26.573441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71572 ]
00:06:34.812 [2024-11-17 00:38:26.719670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.812 [2024-11-17 00:38:26.759700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71583
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71583 /var/tmp/spdk2.sock
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71583 ']'
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:35.382 00:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:35.640 [2024-11-17 00:38:27.465950] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:35.640 [2024-11-17 00:38:27.466667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71583 ]
00:06:35.640 [2024-11-17 00:38:27.612450] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
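Annotation: the launch sequence above is the point of non_locking_app_on_locked_coremask: a second target can share core 0 only because it opts out of the core lock and talks on its own RPC socket. Sketched, with the binary path and flags exactly as traced:

```bash
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &                                                 # holds the core-0 lock
tgt1=$!
"$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # shares core 0 anyway
tgt2=$!
```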
00:06:35.640 [2024-11-17 00:38:27.612490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.640 [2024-11-17 00:38:27.692398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71572
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71572
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71572
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71572 ']'
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71572
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:36.575 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71572
00:06:36.833 killing process with pid 71572
00:06:36.833 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:36.833 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:36.833 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71572'
00:06:36.833 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71572
00:06:36.833 00:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71572
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71583
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71583 ']'
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71583
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71583
00:06:37.400 killing process with pid 71583
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71583'
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71583
00:06:37.400 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71583
00:06:37.657
00:06:37.657 real 0m3.088s
00:06:37.657 user 0m3.306s
00:06:37.657 sys 0m0.866s
00:06:37.657 ************************************
00:06:37.657 END TEST non_locking_app_on_locked_coremask
00:06:37.657 ************************************
00:06:37.657 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:37.657 00:38:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:37.657 00:38:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:37.657 00:38:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:37.657 00:38:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:37.657 00:38:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:37.657 ************************************
00:06:37.657 START TEST locking_app_on_unlocked_coremask
00:06:37.657 ************************************
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71646
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71646 /var/tmp/spdk.sock
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71646 ']'
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:37.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:37.657 00:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:37.916 [2024-11-17 00:38:29.690083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:37.916 [2024-11-17 00:38:29.690894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ]
00:06:37.916 [2024-11-17 00:38:29.836322] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:37.916 [2024-11-17 00:38:29.836376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.916 [2024-11-17 00:38:29.876050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71657
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71657 /var/tmp/spdk2.sock
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71657 ']'
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:38.483 00:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:38.741 [2024-11-17 00:38:30.597278] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:38.741 [2024-11-17 00:38:30.597627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71657 ]
00:06:38.741 [2024-11-17 00:38:30.743578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.999 [2024-11-17 00:38:30.825407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.566 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:39.566 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:39.566 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71657
00:06:39.566 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71657
00:06:39.566 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71646
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71646 ']'
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71646
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71646
00:06:39.824 killing process with pid 71646
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71646'
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71646
00:06:39.824 00:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71646
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71657
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71657 ']'
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71657
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71657
00:06:40.759 killing process with pid 71657
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71657'
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71657
00:06:40.759 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71657
00:06:41.017 ************************************
00:06:41.017 END TEST locking_app_on_unlocked_coremask
00:06:41.017 ************************************
00:06:41.017
00:06:41.017 real 0m3.201s
00:06:41.017 user 0m3.436s
00:06:41.017 sys 0m0.905s
00:06:41.017 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:41.017 00:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:41.017 00:38:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:41.017 00:38:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:41.017 00:38:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:41.017 00:38:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:41.018 ************************************
00:06:41.018 START TEST locking_app_on_locked_coremask
00:06:41.018 ************************************
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:06:41.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71726
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71726 /var/tmp/spdk.sock
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71726 ']'
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:41.018 00:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:41.018 [2024-11-17 00:38:32.929584] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:41.018 [2024-11-17 00:38:32.929863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71726 ]
00:06:41.018 [2024-11-17 00:38:33.072348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.276 [2024-11-17 00:38:33.112289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71736
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71736 /var/tmp/spdk2.sock
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71736 /var/tmp/spdk2.sock
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:41.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71736 /var/tmp/spdk2.sock
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71736 ']'
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:41.878 00:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:42.159 [2024-11-17 00:38:33.843340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
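Annotation: for contrast with the earlier tests, the step traced above launches the second target without --disable-cpumask-locks, so app.c refuses the core-0 claim and the target exits; the NOT wrapper (sketched earlier) turns that expected failure into a pass. Roughly, under the same assumptions:

```bash
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &                         # first target claims the core-0 lock
first=$!
"$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &  # no --disable-cpumask-locks: will abort
second=$!
NOT waitforlisten "$second" /var/tmp/spdk2.sock && echo "conflict detected, as expected"
```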
00:06:41.878 [2024-11-17 00:38:33.843471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71736 ] 00:06:42.159 [2024-11-17 00:38:33.989154] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71726 has claimed it. 00:06:42.159 [2024-11-17 00:38:33.989207] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.725 ERROR: process (pid: 71736) is no longer running 00:06:42.725 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71736) - No such process 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71726 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71726 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71726 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71726 ']' 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71726 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71726 00:06:42.725 killing process with pid 71726 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71726' 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71726 00:06:42.725 00:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71726 00:06:42.983 00:06:42.983 real 0m2.175s 00:06:42.983 user 0m2.384s 00:06:42.983 sys 0m0.566s 00:06:42.983 00:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.983 ************************************ 00:06:42.983 END 
TEST locking_app_on_locked_coremask 00:06:42.983 ************************************ 00:06:42.983 00:38:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.242 00:38:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:43.242 00:38:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.242 00:38:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.242 00:38:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.242 ************************************ 00:06:43.242 START TEST locking_overlapped_coremask 00:06:43.242 ************************************ 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71784 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71784 /var/tmp/spdk.sock 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71784 ']' 00:06:43.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.242 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.242 [2024-11-17 00:38:35.152689] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:43.242 [2024-11-17 00:38:35.152811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71784 ] 00:06:43.242 [2024-11-17 00:38:35.299943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.499 [2024-11-17 00:38:35.340413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.499 [2024-11-17 00:38:35.340652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.499 [2024-11-17 00:38:35.340664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71802 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71802 /var/tmp/spdk2.sock 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71802 /var/tmp/spdk2.sock 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71802 /var/tmp/spdk2.sock 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71802 ']' 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.065 00:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.065 [2024-11-17 00:38:36.057955] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:44.065 [2024-11-17 00:38:36.058230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71802 ] 00:06:44.323 [2024-11-17 00:38:36.211949] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71784 has claimed it. 00:06:44.323 [2024-11-17 00:38:36.212007] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.888 ERROR: process (pid: 71802) is no longer running 00:06:44.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71802) - No such process 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71784 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71784 ']' 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71784 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71784 00:06:44.888 killing process with pid 71784 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71784' 00:06:44.888 00:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71784 00:06:44.888 00:38:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71784 00:06:45.147 00:06:45.147 real 0m1.962s 00:06:45.147 user 0m5.328s 00:06:45.147 sys 0m0.425s 00:06:45.147 ************************************ 00:06:45.147 END TEST locking_overlapped_coremask 00:06:45.147 ************************************ 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.147 00:38:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:45.147 00:38:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.147 00:38:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.147 00:38:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.147 ************************************ 00:06:45.147 START TEST locking_overlapped_coremask_via_rpc 00:06:45.147 ************************************ 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71844 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71844 /var/tmp/spdk.sock 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71844 ']' 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.147 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.147 [2024-11-17 00:38:37.161144] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:45.147 [2024-11-17 00:38:37.161263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71844 ] 00:06:45.405 [2024-11-17 00:38:37.302675] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
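Note on the lock-file convention these tests exercise: each claimed core N is backed by a file /var/tmp/spdk_cpu_lock_NNN, and the check_remaining_locks helper traced earlier simply compares a glob of those files against a brace expansion of the cores the target was given. A minimal standalone sketch of that comparison, assuming the same 0x7 mask (cores 0-2) used here:

locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'cores 0-2 each hold a lock file'

With --disable-cpumask-locks, as in the run above, none of these files exist until framework_enable_cpumask_locks is invoked over RPC.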
00:06:45.405 [2024-11-17 00:38:37.302718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.405 [2024-11-17 00:38:37.344544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.405 [2024-11-17 00:38:37.344656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.405 [2024-11-17 00:38:37.344611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71862 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71862 /var/tmp/spdk2.sock 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71862 ']' 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.970 00:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.228 [2024-11-17 00:38:38.064257] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:46.228 [2024-11-17 00:38:38.064575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71862 ] 00:06:46.228 [2024-11-17 00:38:38.221007] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.228 [2024-11-17 00:38:38.221053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.485 [2024-11-17 00:38:38.291723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.485 [2024-11-17 00:38:38.291728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.486 [2024-11-17 00:38:38.291808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.051 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.051 [2024-11-17 00:38:38.879509] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71844 has claimed it. 
00:06:47.052 request: 00:06:47.052 { 00:06:47.052 "method": "framework_enable_cpumask_locks", 00:06:47.052 "req_id": 1 00:06:47.052 } 00:06:47.052 Got JSON-RPC error response 00:06:47.052 response: 00:06:47.052 { 00:06:47.052 "code": -32603, 00:06:47.052 "message": "Failed to claim CPU core: 2" 00:06:47.052 } 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71844 /var/tmp/spdk.sock 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71844 ']' 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.052 00:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71862 /var/tmp/spdk2.sock 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71862 ']' 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
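The -32603 response above is the expected outcome: pid 71844 has already enabled its locks, so the second target cannot claim the shared core 2. A hedged sketch (not part of the test script) of re-issuing that RPC by hand while both targets are still up:

# -s selects the UNIX domain socket of the second spdk_tgt instance
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# while pid 71844 holds /var/tmp/spdk_cpu_lock_002 this returns the same
# "Failed to claim CPU core: 2" (code -32603) error shown above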
00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.310 00:06:47.310 real 0m2.247s 00:06:47.310 user 0m1.027s 00:06:47.310 sys 0m0.135s 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.310 00:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.310 ************************************ 00:06:47.310 END TEST locking_overlapped_coremask_via_rpc 00:06:47.310 ************************************ 00:06:47.310 00:38:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.310 00:38:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71844 ]] 00:06:47.311 00:38:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71844 00:06:47.311 00:38:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71844 ']' 00:06:47.311 00:38:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71844 00:06:47.311 00:38:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:47.311 00:38:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.311 00:38:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71844 00:06:47.567 00:38:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.567 00:38:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.567 00:38:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71844' 00:06:47.567 killing process with pid 71844 00:06:47.567 00:38:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71844 00:06:47.567 00:38:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71844 00:06:47.825 00:38:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71862 ]] 00:06:47.825 00:38:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71862 00:06:47.825 00:38:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71862 ']' 00:06:47.825 00:38:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71862 00:06:47.825 00:38:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:47.825 00:38:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.825 
00:38:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71862 00:06:47.825 00:38:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:47.825 00:38:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:47.825 killing process with pid 71862 00:06:47.825 00:38:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71862' 00:06:47.825 00:38:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71862 00:06:47.825 00:38:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71862 00:06:48.082 00:38:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.082 00:38:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:48.082 00:38:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71844 ]] 00:06:48.082 00:38:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71844 00:06:48.082 00:38:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71844 ']' 00:06:48.082 00:38:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71844 00:06:48.082 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71844) - No such process 00:06:48.082 Process with pid 71844 is not found 00:06:48.082 00:38:39 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71844 is not found' 00:06:48.083 00:38:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71862 ]] 00:06:48.083 00:38:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71862 00:06:48.083 00:38:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71862 ']' 00:06:48.083 00:38:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71862 00:06:48.083 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71862) - No such process 00:06:48.083 Process with pid 71862 is not found 00:06:48.083 00:38:39 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71862 is not found' 00:06:48.083 00:38:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.083 00:06:48.083 real 0m16.700s 00:06:48.083 user 0m28.679s 00:06:48.083 sys 0m4.607s 00:06:48.083 00:38:39 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.083 00:38:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.083 ************************************ 00:06:48.083 END TEST cpu_locks 00:06:48.083 ************************************ 00:06:48.083 00:06:48.083 real 0m43.691s 00:06:48.083 user 1m23.783s 00:06:48.083 sys 0m7.940s 00:06:48.083 00:38:40 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.083 ************************************ 00:06:48.083 END TEST event 00:06:48.083 ************************************ 00:06:48.083 00:38:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.083 00:38:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:48.083 00:38:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.083 00:38:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.083 00:38:40 -- common/autotest_common.sh@10 -- # set +x 00:06:48.083 ************************************ 00:06:48.083 START TEST thread 00:06:48.083 ************************************ 00:06:48.083 00:38:40 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:48.341 * Looking for test storage... 
00:06:48.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:48.341 00:38:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.341 00:38:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.341 00:38:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.341 00:38:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.341 00:38:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.341 00:38:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.341 00:38:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.341 00:38:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.341 00:38:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.341 00:38:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.341 00:38:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.341 00:38:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:48.341 00:38:40 thread -- scripts/common.sh@345 -- # : 1 00:06:48.341 00:38:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.341 00:38:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.341 00:38:40 thread -- scripts/common.sh@365 -- # decimal 1 00:06:48.341 00:38:40 thread -- scripts/common.sh@353 -- # local d=1 00:06:48.341 00:38:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.341 00:38:40 thread -- scripts/common.sh@355 -- # echo 1 00:06:48.341 00:38:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.341 00:38:40 thread -- scripts/common.sh@366 -- # decimal 2 00:06:48.341 00:38:40 thread -- scripts/common.sh@353 -- # local d=2 00:06:48.341 00:38:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.341 00:38:40 thread -- scripts/common.sh@355 -- # echo 2 00:06:48.341 00:38:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.341 00:38:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.341 00:38:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.341 00:38:40 thread -- scripts/common.sh@368 -- # return 0 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.341 --rc genhtml_branch_coverage=1 00:06:48.341 --rc genhtml_function_coverage=1 00:06:48.341 --rc genhtml_legend=1 00:06:48.341 --rc geninfo_all_blocks=1 00:06:48.341 --rc geninfo_unexecuted_blocks=1 00:06:48.341 00:06:48.341 ' 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.341 --rc genhtml_branch_coverage=1 00:06:48.341 --rc genhtml_function_coverage=1 00:06:48.341 --rc genhtml_legend=1 00:06:48.341 --rc geninfo_all_blocks=1 00:06:48.341 --rc geninfo_unexecuted_blocks=1 00:06:48.341 00:06:48.341 ' 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:48.341 --rc genhtml_branch_coverage=1 00:06:48.341 --rc genhtml_function_coverage=1 00:06:48.341 --rc genhtml_legend=1 00:06:48.341 --rc geninfo_all_blocks=1 00:06:48.341 --rc geninfo_unexecuted_blocks=1 00:06:48.341 00:06:48.341 ' 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.341 --rc genhtml_branch_coverage=1 00:06:48.341 --rc genhtml_function_coverage=1 00:06:48.341 --rc genhtml_legend=1 00:06:48.341 --rc geninfo_all_blocks=1 00:06:48.341 --rc geninfo_unexecuted_blocks=1 00:06:48.341 00:06:48.341 ' 00:06:48.341 00:38:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:48.341 00:38:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.342 00:38:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.342 ************************************ 00:06:48.342 START TEST thread_poller_perf 00:06:48.342 ************************************ 00:06:48.342 00:38:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.342 [2024-11-17 00:38:40.266165] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:48.342 [2024-11-17 00:38:40.266268] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71989 ] 00:06:48.600 [2024-11-17 00:38:40.409446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.600 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:48.600 [2024-11-17 00:38:40.451612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.537 [2024-11-17T00:38:41.600Z] ====================================== 00:06:49.537 [2024-11-17T00:38:41.600Z] busy:2615199210 (cyc) 00:06:49.537 [2024-11-17T00:38:41.600Z] total_run_count: 307000 00:06:49.537 [2024-11-17T00:38:41.600Z] tsc_hz: 2600000000 (cyc) 00:06:49.537 [2024-11-17T00:38:41.600Z] ====================================== 00:06:49.537 [2024-11-17T00:38:41.600Z] poller_cost: 8518 (cyc), 3276 (nsec) 00:06:49.537 00:06:49.537 real 0m1.293s 00:06:49.537 user 0m1.115s 00:06:49.537 sys 0m0.072s 00:06:49.537 00:38:41 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.537 00:38:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.537 ************************************ 00:06:49.537 END TEST thread_poller_perf 00:06:49.537 ************************************ 00:06:49.537 00:38:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.537 00:38:41 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:49.537 00:38:41 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.537 00:38:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.537 ************************************ 00:06:49.537 START TEST thread_poller_perf 00:06:49.537 ************************************ 00:06:49.537 00:38:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.794 [2024-11-17 00:38:41.603467] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:49.794 [2024-11-17 00:38:41.603581] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72020 ] 00:06:49.794 [2024-11-17 00:38:41.752599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.794 Running 1000 pollers for 1 seconds with 0 microseconds period. 
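Two details of the harness are visible in the trace: the poller_perf flags map onto the run banners (-b is the poller count, -l the poller period in microseconds, -t the run time in seconds, so the -l 0 invocation above announces a 0 microseconds, i.e. busy, period), and poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick sketch reproducing the first run's reported numbers:

busy_cyc=2615199210; runs=307000; tsc_hz=2600000000
cost_cyc=$(( busy_cyc / runs ))                    # 8518 cyc, as reported
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))    # 3276 nsec, as reported
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"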
00:06:49.794 [2024-11-17 00:38:41.794315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.173 [2024-11-17T00:38:43.236Z] ====================================== 00:06:51.173 [2024-11-17T00:38:43.236Z] busy:2603368956 (cyc) 00:06:51.173 [2024-11-17T00:38:43.236Z] total_run_count: 3971000 00:06:51.173 [2024-11-17T00:38:43.236Z] tsc_hz: 2600000000 (cyc) 00:06:51.173 [2024-11-17T00:38:43.236Z] ====================================== 00:06:51.173 [2024-11-17T00:38:43.236Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:51.173 00:06:51.173 real 0m1.290s 00:06:51.173 user 0m1.113s 00:06:51.173 sys 0m0.071s 00:06:51.173 00:38:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.173 00:38:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.173 ************************************ 00:06:51.173 END TEST thread_poller_perf 00:06:51.173 ************************************ 00:06:51.173 00:38:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:51.173 00:06:51.173 real 0m2.814s 00:06:51.173 user 0m2.335s 00:06:51.173 sys 0m0.273s 00:06:51.173 00:38:42 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.173 00:38:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.173 ************************************ 00:06:51.173 END TEST thread 00:06:51.173 ************************************ 00:06:51.173 00:38:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:51.173 00:38:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:51.173 00:38:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.173 00:38:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.173 00:38:42 -- common/autotest_common.sh@10 -- # set +x 00:06:51.173 ************************************ 00:06:51.173 START TEST app_cmdline 00:06:51.173 ************************************ 00:06:51.173 00:38:42 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:51.173 * Looking for test storage... 
00:06:51.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.173 00:38:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.173 --rc genhtml_branch_coverage=1 00:06:51.173 --rc genhtml_function_coverage=1 00:06:51.173 --rc genhtml_legend=1 00:06:51.173 --rc geninfo_all_blocks=1 00:06:51.173 --rc geninfo_unexecuted_blocks=1 00:06:51.173 00:06:51.173 ' 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.173 --rc genhtml_branch_coverage=1 00:06:51.173 --rc genhtml_function_coverage=1 00:06:51.173 --rc genhtml_legend=1 00:06:51.173 --rc geninfo_all_blocks=1 00:06:51.173 --rc geninfo_unexecuted_blocks=1 00:06:51.173 
00:06:51.173 ' 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.173 --rc genhtml_branch_coverage=1 00:06:51.173 --rc genhtml_function_coverage=1 00:06:51.173 --rc genhtml_legend=1 00:06:51.173 --rc geninfo_all_blocks=1 00:06:51.173 --rc geninfo_unexecuted_blocks=1 00:06:51.173 00:06:51.173 ' 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.173 --rc genhtml_branch_coverage=1 00:06:51.173 --rc genhtml_function_coverage=1 00:06:51.173 --rc genhtml_legend=1 00:06:51.173 --rc geninfo_all_blocks=1 00:06:51.173 --rc geninfo_unexecuted_blocks=1 00:06:51.173 00:06:51.173 ' 00:06:51.173 00:38:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:51.173 00:38:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72109 00:06:51.173 00:38:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72109 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72109 ']' 00:06:51.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.173 00:38:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:51.173 00:38:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.173 [2024-11-17 00:38:43.147145] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:51.174 [2024-11-17 00:38:43.147574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72109 ] 00:06:51.431 [2024-11-17 00:38:43.293814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.431 [2024-11-17 00:38:43.336202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.997 00:38:43 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.997 00:38:43 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:51.997 00:38:43 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:52.256 { 00:06:52.256 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:52.256 "fields": { 00:06:52.256 "major": 24, 00:06:52.256 "minor": 9, 00:06:52.256 "patch": 1, 00:06:52.256 "suffix": "-pre", 00:06:52.256 "commit": "b18e1bd62" 00:06:52.256 } 00:06:52.256 } 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:52.256 00:38:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:52.256 00:38:44 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.514 request: 00:06:52.514 { 00:06:52.514 "method": "env_dpdk_get_mem_stats", 00:06:52.514 "req_id": 1 00:06:52.514 } 00:06:52.514 Got JSON-RPC error response 00:06:52.514 response: 00:06:52.514 { 00:06:52.514 "code": -32601, 00:06:52.514 "message": "Method not found" 00:06:52.514 } 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.514 00:38:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72109 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72109 ']' 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72109 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72109 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.514 killing process with pid 72109 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72109' 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@969 -- # kill 72109 00:06:52.514 00:38:44 app_cmdline -- common/autotest_common.sh@974 -- # wait 72109 00:06:52.772 00:06:52.772 real 0m1.853s 00:06:52.772 user 0m2.157s 00:06:52.772 sys 0m0.444s 00:06:52.772 00:38:44 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.772 00:38:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.772 ************************************ 00:06:52.772 END TEST app_cmdline 00:06:52.772 ************************************ 00:06:52.772 00:38:44 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:52.772 00:38:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.772 00:38:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.772 00:38:44 -- common/autotest_common.sh@10 -- # set +x 00:06:52.772 ************************************ 00:06:52.772 START TEST version 00:06:52.772 ************************************ 00:06:52.772 00:38:44 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:53.031 * Looking for test storage... 
00:06:53.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:53.031 00:38:44 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.031 00:38:44 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.031 00:38:44 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.031 00:38:44 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.031 00:38:44 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.031 00:38:44 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.031 00:38:44 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.031 00:38:44 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.031 00:38:44 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.031 00:38:44 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.031 00:38:44 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.031 00:38:44 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.031 00:38:44 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.031 00:38:44 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.031 00:38:44 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.031 00:38:44 version -- scripts/common.sh@344 -- # case "$op" in 00:06:53.031 00:38:44 version -- scripts/common.sh@345 -- # : 1 00:06:53.031 00:38:44 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.031 00:38:44 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.031 00:38:44 version -- scripts/common.sh@365 -- # decimal 1 00:06:53.031 00:38:44 version -- scripts/common.sh@353 -- # local d=1 00:06:53.031 00:38:44 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.031 00:38:44 version -- scripts/common.sh@355 -- # echo 1 00:06:53.031 00:38:44 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.031 00:38:44 version -- scripts/common.sh@366 -- # decimal 2 00:06:53.031 00:38:44 version -- scripts/common.sh@353 -- # local d=2 00:06:53.031 00:38:44 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.031 00:38:44 version -- scripts/common.sh@355 -- # echo 2 00:06:53.031 00:38:44 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.031 00:38:44 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.031 00:38:44 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.031 00:38:44 version -- scripts/common.sh@368 -- # return 0 00:06:53.031 00:38:44 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.031 00:38:44 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.031 --rc genhtml_branch_coverage=1 00:06:53.031 --rc genhtml_function_coverage=1 00:06:53.031 --rc genhtml_legend=1 00:06:53.031 --rc geninfo_all_blocks=1 00:06:53.031 --rc geninfo_unexecuted_blocks=1 00:06:53.031 00:06:53.031 ' 00:06:53.031 00:38:44 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.031 --rc genhtml_branch_coverage=1 00:06:53.031 --rc genhtml_function_coverage=1 00:06:53.031 --rc genhtml_legend=1 00:06:53.031 --rc geninfo_all_blocks=1 00:06:53.031 --rc geninfo_unexecuted_blocks=1 00:06:53.031 00:06:53.031 ' 00:06:53.031 00:38:44 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.031 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:53.031 --rc genhtml_branch_coverage=1 00:06:53.031 --rc genhtml_function_coverage=1 00:06:53.031 --rc genhtml_legend=1 00:06:53.031 --rc geninfo_all_blocks=1 00:06:53.031 --rc geninfo_unexecuted_blocks=1 00:06:53.031 00:06:53.031 ' 00:06:53.031 00:38:44 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.031 --rc genhtml_branch_coverage=1 00:06:53.031 --rc genhtml_function_coverage=1 00:06:53.031 --rc genhtml_legend=1 00:06:53.031 --rc geninfo_all_blocks=1 00:06:53.031 --rc geninfo_unexecuted_blocks=1 00:06:53.031 00:06:53.031 ' 00:06:53.031 00:38:44 version -- app/version.sh@17 -- # get_header_version major 00:06:53.031 00:38:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.031 00:38:44 version -- app/version.sh@14 -- # cut -f2 00:06:53.031 00:38:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.031 00:38:44 version -- app/version.sh@17 -- # major=24 00:06:53.031 00:38:44 version -- app/version.sh@18 -- # get_header_version minor 00:06:53.031 00:38:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.031 00:38:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.031 00:38:44 version -- app/version.sh@14 -- # cut -f2 00:06:53.031 00:38:44 version -- app/version.sh@18 -- # minor=9 00:06:53.031 00:38:44 version -- app/version.sh@19 -- # get_header_version patch 00:06:53.031 00:38:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.031 00:38:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.031 00:38:44 version -- app/version.sh@14 -- # cut -f2 00:06:53.031 00:38:44 version -- app/version.sh@19 -- # patch=1 00:06:53.031 00:38:44 version -- app/version.sh@20 -- # get_header_version suffix 00:06:53.031 00:38:44 version -- app/version.sh@14 -- # cut -f2 00:06:53.031 00:38:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.031 00:38:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.031 00:38:44 version -- app/version.sh@20 -- # suffix=-pre 00:06:53.031 00:38:44 version -- app/version.sh@22 -- # version=24.9 00:06:53.031 00:38:44 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:53.031 00:38:44 version -- app/version.sh@25 -- # version=24.9.1 00:06:53.031 00:38:44 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:53.031 00:38:44 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:53.031 00:38:44 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:53.031 00:38:45 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:53.031 00:38:45 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:53.031 00:06:53.031 real 0m0.196s 00:06:53.031 user 0m0.127s 00:06:53.031 sys 0m0.100s 00:06:53.031 00:38:45 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.031 00:38:45 version -- common/autotest_common.sh@10 -- # set +x 00:06:53.031 ************************************ 00:06:53.031 END TEST version 
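The version test above derives each component from include/spdk/version.h: every get_header_version call greps the matching #define, takes the second tab-separated field with cut -f2, and strips the quotes with tr, giving major=24, minor=9, patch=1, suffix=-pre here. The patch level is appended only when it is non-zero, and a non-empty suffix is reported as rc0 before the result is compared against what the Python package returns. A standalone sketch of the same derivation, assuming the repo layout used in this run:

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ -n $suffix ]] && version=${version}rc0  # '-pre' shows up as rc0; the exact rule lives in test/app/version.sh
    echo "$version"                            # 24.9.1rc0 for this tree
    # cross-check against the Python package, as the test does:
    PYTHONPATH=/home/vagrant/spdk_repo/spdk/python \
        python3 -c 'import spdk; print(spdk.__version__)'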
00:06:53.031 ************************************ 00:06:53.031 00:38:45 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:53.031 00:38:45 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:53.031 00:38:45 -- spdk/autotest.sh@194 -- # uname -s 00:06:53.031 00:38:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:53.031 00:38:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:53.031 00:38:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:53.031 00:38:45 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:53.031 00:38:45 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:53.031 00:38:45 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.032 00:38:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.032 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:06:53.032 ************************************ 00:06:53.032 START TEST blockdev_nvme 00:06:53.032 ************************************ 00:06:53.032 00:38:45 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:53.290 * Looking for test storage... 00:06:53.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.290 00:38:45 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.290 --rc genhtml_branch_coverage=1 00:06:53.290 --rc genhtml_function_coverage=1 00:06:53.290 --rc genhtml_legend=1 00:06:53.290 --rc geninfo_all_blocks=1 00:06:53.290 --rc geninfo_unexecuted_blocks=1 00:06:53.290 00:06:53.290 ' 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.290 --rc genhtml_branch_coverage=1 00:06:53.290 --rc genhtml_function_coverage=1 00:06:53.290 --rc genhtml_legend=1 00:06:53.290 --rc geninfo_all_blocks=1 00:06:53.290 --rc geninfo_unexecuted_blocks=1 00:06:53.290 00:06:53.290 ' 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.290 --rc genhtml_branch_coverage=1 00:06:53.290 --rc genhtml_function_coverage=1 00:06:53.290 --rc genhtml_legend=1 00:06:53.290 --rc geninfo_all_blocks=1 00:06:53.290 --rc geninfo_unexecuted_blocks=1 00:06:53.290 00:06:53.290 ' 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.290 --rc genhtml_branch_coverage=1 00:06:53.290 --rc genhtml_function_coverage=1 00:06:53.290 --rc genhtml_legend=1 00:06:53.290 --rc geninfo_all_blocks=1 00:06:53.290 --rc geninfo_unexecuted_blocks=1 00:06:53.290 00:06:53.290 ' 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:53.290 00:38:45 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72270 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72270 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 72270 ']' 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.290 00:38:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:53.290 00:38:45 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:53.290 [2024-11-17 00:38:45.287154] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
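At this point blockdev.sh brings up a bare spdk_tgt, waits for it to listen on the default RPC socket, and loads the bdev subsystem configuration that gen_nvme.sh emitted for the four QEMU-emulated PCIe controllers (the EAL parameter line and the load_subsystem_config call follow below). A rough by-hand equivalent, with a socket-polling loop standing in for waitforlisten and the attach flags taken from the generated JSON:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    # crude readiness check: poll until the RPC socket appears
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # attach the first controller described by the generated config
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0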
00:06:53.290 [2024-11-17 00:38:45.287268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72270 ] 00:06:53.548 [2024-11-17 00:38:45.434312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.548 [2024-11-17 00:38:45.476637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.115 00:38:46 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.115 00:38:46 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:54.115 00:38:46 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:54.115 00:38:46 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:54.115 00:38:46 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:54.115 00:38:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:54.115 00:38:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:54.115 00:38:46 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:54.115 00:38:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.115 00:38:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.375 00:38:46 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.375 00:38:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:54.375 00:38:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.375 00:38:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.375 00:38:46 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.375 00:38:46 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:54.375 00:38:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:54.375 00:38:46 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.375 00:38:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.634 00:38:46 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.635 00:38:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:54.635 00:38:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:54.635 00:38:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a17ad7d4-738b-4c77-9d0f-18d224ab5b17"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a17ad7d4-738b-4c77-9d0f-18d224ab5b17",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f3cc5ec9-dc6c-4c04-82ab-44a31b54445e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f3cc5ec9-dc6c-4c04-82ab-44a31b54445e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "92ee2144-ce30-44eb-bee2-c6bb05f49778"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "92ee2144-ce30-44eb-bee2-c6bb05f49778",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "93da6ae2-8d57-4607-ab32-df3b22f3ca80"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "93da6ae2-8d57-4607-ab32-df3b22f3ca80",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0d2f0f52-3843-4df7-bc0f-7812cb6a610f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "0d2f0f52-3843-4df7-bc0f-7812cb6a610f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "63621982-f198-485c-96d8-07c3d965f6b5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "63621982-f198-485c-96d8-07c3d965f6b5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:54.635 00:38:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:54.635 00:38:46 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:54.635 00:38:46 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:54.635 00:38:46 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72270 00:06:54.635 00:38:46 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 72270 ']' 00:06:54.635 00:38:46 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 72270 00:06:54.635 00:38:46 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:54.635 00:38:46 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.635 00:38:46 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72270 00:06:54.635 00:38:46 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.635 00:38:46 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.635 killing process with pid 72270 00:06:54.635 00:38:46 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72270' 00:06:54.635 00:38:46 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 72270 00:06:54.635 00:38:46 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 72270 00:06:55.202 00:38:46 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:55.202 00:38:46 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:55.202 00:38:46 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:55.203 00:38:46 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.203 00:38:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.203 ************************************ 00:06:55.203 START TEST bdev_hello_world 00:06:55.203 ************************************ 00:06:55.203 00:38:46 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:55.203 [2024-11-17 00:38:47.044691] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:55.203 [2024-11-17 00:38:47.044823] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72332 ] 00:06:55.203 [2024-11-17 00:38:47.193481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.203 [2024-11-17 00:38:47.236370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.775 [2024-11-17 00:38:47.623158] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:55.775 [2024-11-17 00:38:47.623240] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:55.775 [2024-11-17 00:38:47.623270] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:55.775 [2024-11-17 00:38:47.625926] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:55.775 [2024-11-17 00:38:47.627333] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:55.775 [2024-11-17 00:38:47.627413] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:55.775 [2024-11-17 00:38:47.628060] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
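Backing up to the snapshot taken before this hello-world run: blockdev.sh@747-748 captured the bdev_get_bdevs dump above and reduced it to a name list by filtering out anything already claimed. The same query can be reproduced directly against the running target; the expected output for this configuration is the six namespaces Nvme0n1 through Nvme3n1, one per line:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'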
00:06:55.775 00:06:55.775 [2024-11-17 00:38:47.628103] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:56.037 00:06:56.037 real 0m0.956s 00:06:56.037 user 0m0.651s 00:06:56.037 sys 0m0.198s 00:06:56.037 00:38:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.037 ************************************ 00:06:56.037 END TEST bdev_hello_world 00:06:56.037 ************************************ 00:06:56.037 00:38:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:56.037 00:38:47 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:56.037 00:38:47 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.037 00:38:47 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.037 00:38:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.037 ************************************ 00:06:56.037 START TEST bdev_bounds 00:06:56.037 ************************************ 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72363 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.037 Process bdevio pid: 72363 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72363' 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72363 00:06:56.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72363 ']' 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.037 00:38:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:56.037 [2024-11-17 00:38:48.082705] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
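The bdev_bounds test starting here hands the same bdev.json to the bdevio server (three cores, -c 0x7, per the EAL line below) and then drives it with tests.py perform_tests, which produces the per-suite write/read/comparev results that follow. A sketch of the same two-process invocation, with the socket poll again standing in for waitforlisten:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests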
00:06:56.037 [2024-11-17 00:38:48.082869] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72363 ] 00:06:56.300 [2024-11-17 00:38:48.235063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.300 [2024-11-17 00:38:48.310614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.300 [2024-11-17 00:38:48.310954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.300 [2024-11-17 00:38:48.310995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.253 00:38:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.253 00:38:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:57.253 00:38:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:57.253 I/O targets: 00:06:57.253 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:57.253 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:57.253 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:57.253 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:57.253 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:57.253 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:57.253 00:06:57.253 00:06:57.253 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.253 http://cunit.sourceforge.net/ 00:06:57.253 00:06:57.253 00:06:57.253 Suite: bdevio tests on: Nvme3n1 00:06:57.253 Test: blockdev write read block ...passed 00:06:57.253 Test: blockdev write zeroes read block ...passed 00:06:57.253 Test: blockdev write zeroes read no split ...passed 00:06:57.253 Test: blockdev write zeroes read split ...passed 00:06:57.253 Test: blockdev write zeroes read split partial ...passed 00:06:57.253 Test: blockdev reset ...[2024-11-17 00:38:49.060170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:57.253 [2024-11-17 00:38:49.064140] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:57.253 passed 00:06:57.253 Test: blockdev write read 8 blocks ...passed 00:06:57.253 Test: blockdev write read size > 128k ...passed 00:06:57.253 Test: blockdev write read invalid size ...passed 00:06:57.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.253 Test: blockdev write read max offset ...passed 00:06:57.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.253 Test: blockdev writev readv 8 blocks ...passed 00:06:57.253 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.253 Test: blockdev writev readv block ...passed 00:06:57.253 Test: blockdev writev readv size > 128k ...passed 00:06:57.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.253 Test: blockdev comparev and writev ...[2024-11-17 00:38:49.077074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2c06000 len:0x1000 00:06:57.253 [2024-11-17 00:38:49.077187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.253 passed 00:06:57.253 Test: blockdev nvme passthru rw ...passed 00:06:57.253 Test: blockdev nvme passthru vendor specific ...[2024-11-17 00:38:49.079085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.253 [2024-11-17 00:38:49.079157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.253 passed 00:06:57.253 Test: blockdev nvme admin passthru ...passed 00:06:57.253 Test: blockdev copy ...passed 00:06:57.253 Suite: bdevio tests on: Nvme2n3 00:06:57.253 Test: blockdev write read block ...passed 00:06:57.253 Test: blockdev write zeroes read block ...passed 00:06:57.253 Test: blockdev write zeroes read no split ...passed 00:06:57.253 Test: blockdev write zeroes read split ...passed 00:06:57.253 Test: blockdev write zeroes read split partial ...passed 00:06:57.253 Test: blockdev reset ...[2024-11-17 00:38:49.098163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:57.253 [2024-11-17 00:38:49.101883] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:57.253 passed 00:06:57.253 Test: blockdev write read 8 blocks ...passed 00:06:57.253 Test: blockdev write read size > 128k ...passed 00:06:57.253 Test: blockdev write read invalid size ...passed 00:06:57.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.253 Test: blockdev write read max offset ...passed 00:06:57.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.253 Test: blockdev writev readv 8 blocks ...passed 00:06:57.253 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.253 Test: blockdev writev readv block ...passed 00:06:57.253 Test: blockdev writev readv size > 128k ...passed 00:06:57.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.253 Test: blockdev comparev and writev ...[2024-11-17 00:38:49.114377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e5805000 len:0x1000 00:06:57.253 [2024-11-17 00:38:49.114424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.253 passed 00:06:57.253 Test: blockdev nvme passthru rw ...passed 00:06:57.253 Test: blockdev nvme passthru vendor specific ...[2024-11-17 00:38:49.115993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.253 [2024-11-17 00:38:49.116022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.253 passed 00:06:57.253 Test: blockdev nvme admin passthru ...passed 00:06:57.253 Test: blockdev copy ...passed 00:06:57.253 Suite: bdevio tests on: Nvme2n2 00:06:57.253 Test: blockdev write read block ...passed 00:06:57.253 Test: blockdev write zeroes read block ...passed 00:06:57.253 Test: blockdev write zeroes read no split ...passed 00:06:57.253 Test: blockdev write zeroes read split ...passed 00:06:57.253 Test: blockdev write zeroes read split partial ...passed 00:06:57.253 Test: blockdev reset ...[2024-11-17 00:38:49.135050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:57.253 [2024-11-17 00:38:49.138048] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:57.253 passed 00:06:57.253 Test: blockdev write read 8 blocks ...passed 00:06:57.253 Test: blockdev write read size > 128k ...passed 00:06:57.253 Test: blockdev write read invalid size ...passed 00:06:57.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.253 Test: blockdev write read max offset ...passed 00:06:57.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.253 Test: blockdev writev readv 8 blocks ...passed 00:06:57.253 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.253 Test: blockdev writev readv block ...passed 00:06:57.253 Test: blockdev writev readv size > 128k ...passed 00:06:57.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.253 Test: blockdev comparev and writev ...[2024-11-17 00:38:49.150386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e5c36000 len:0x1000 00:06:57.253 [2024-11-17 00:38:49.150429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.253 passed 00:06:57.253 Test: blockdev nvme passthru rw ...passed 00:06:57.253 Test: blockdev nvme passthru vendor specific ...[2024-11-17 00:38:49.151902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.253 [2024-11-17 00:38:49.151930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.253 passed 00:06:57.253 Test: blockdev nvme admin passthru ...passed 00:06:57.253 Test: blockdev copy ...passed 00:06:57.253 Suite: bdevio tests on: Nvme2n1 00:06:57.253 Test: blockdev write read block ...passed 00:06:57.253 Test: blockdev write zeroes read block ...passed 00:06:57.253 Test: blockdev write zeroes read no split ...passed 00:06:57.253 Test: blockdev write zeroes read split ...passed 00:06:57.253 Test: blockdev write zeroes read split partial ...passed 00:06:57.253 Test: blockdev reset ...[2024-11-17 00:38:49.172253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:57.253 [2024-11-17 00:38:49.174974] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:57.253 passed 00:06:57.253 Test: blockdev write read 8 blocks ...passed 00:06:57.253 Test: blockdev write read size > 128k ...passed 00:06:57.253 Test: blockdev write read invalid size ...passed 00:06:57.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.253 Test: blockdev write read max offset ...passed 00:06:57.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.253 Test: blockdev writev readv 8 blocks ...passed 00:06:57.253 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.253 Test: blockdev writev readv block ...passed 00:06:57.253 Test: blockdev writev readv size > 128k ...passed 00:06:57.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.253 Test: blockdev comparev and writev ...[2024-11-17 00:38:49.185778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e5c30000 len:0x1000 00:06:57.253 [2024-11-17 00:38:49.185824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.253 passed 00:06:57.253 Test: blockdev nvme passthru rw ...passed 00:06:57.253 Test: blockdev nvme passthru vendor specific ...[2024-11-17 00:38:49.187350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.253 [2024-11-17 00:38:49.187396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.253 passed 00:06:57.253 Test: blockdev nvme admin passthru ...passed 00:06:57.253 Test: blockdev copy ...passed 00:06:57.253 Suite: bdevio tests on: Nvme1n1 00:06:57.253 Test: blockdev write read block ...passed 00:06:57.253 Test: blockdev write zeroes read block ...passed 00:06:57.253 Test: blockdev write zeroes read no split ...passed 00:06:57.253 Test: blockdev write zeroes read split ...passed 00:06:57.253 Test: blockdev write zeroes read split partial ...passed 00:06:57.253 Test: blockdev reset ...[2024-11-17 00:38:49.209381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:57.254 [2024-11-17 00:38:49.212190] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:57.254 passed 00:06:57.254 Test: blockdev write read 8 blocks ...passed 00:06:57.254 Test: blockdev write read size > 128k ...passed 00:06:57.254 Test: blockdev write read invalid size ...passed 00:06:57.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.254 Test: blockdev write read max offset ...passed 00:06:57.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.254 Test: blockdev writev readv 8 blocks ...passed 00:06:57.254 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.254 Test: blockdev writev readv block ...passed 00:06:57.254 Test: blockdev writev readv size > 128k ...passed 00:06:57.254 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.254 Test: blockdev comparev and writev ...[2024-11-17 00:38:49.223313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e5c2c000 len:0x1000 00:06:57.254 [2024-11-17 00:38:49.223352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.254 passed 00:06:57.254 Test: blockdev nvme passthru rw ...passed 00:06:57.254 Test: blockdev nvme passthru vendor specific ...[2024-11-17 00:38:49.225188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.254 [2024-11-17 00:38:49.225222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.254 passed 00:06:57.254 Test: blockdev nvme admin passthru ...passed 00:06:57.254 Test: blockdev copy ...passed 00:06:57.254 Suite: bdevio tests on: Nvme0n1 00:06:57.254 Test: blockdev write read block ...passed 00:06:57.254 Test: blockdev write zeroes read block ...passed 00:06:57.254 Test: blockdev write zeroes read no split ...passed 00:06:57.254 Test: blockdev write zeroes read split ...passed 00:06:57.254 Test: blockdev write zeroes read split partial ...passed 00:06:57.254 Test: blockdev reset ...[2024-11-17 00:38:49.245953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:57.254 [2024-11-17 00:38:49.248706] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:57.254 passed 00:06:57.254 Test: blockdev write read 8 blocks ...passed 00:06:57.254 Test: blockdev write read size > 128k ...passed 00:06:57.254 Test: blockdev write read invalid size ...passed 00:06:57.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.254 Test: blockdev write read max offset ...passed 00:06:57.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.254 Test: blockdev writev readv 8 blocks ...passed 00:06:57.254 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.254 Test: blockdev writev readv block ...passed 00:06:57.254 Test: blockdev writev readv size > 128k ...passed 00:06:57.254 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.254 Test: blockdev comparev and writev ...passed 00:06:57.254 Test: blockdev nvme passthru rw ...[2024-11-17 00:38:49.258950] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:57.254 separate metadata which is not supported yet. 00:06:57.254 passed 00:06:57.254 Test: blockdev nvme passthru vendor specific ...[2024-11-17 00:38:49.260072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:57.254 [2024-11-17 00:38:49.260108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:57.254 passed 00:06:57.254 Test: blockdev nvme admin passthru ...passed 00:06:57.254 Test: blockdev copy ...passed 00:06:57.254 00:06:57.254 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.254 suites 6 6 n/a 0 0 00:06:57.254 tests 138 138 138 0 0 00:06:57.254 asserts 893 893 893 0 n/a 00:06:57.254 00:06:57.254 Elapsed time = 0.516 seconds 00:06:57.254 0 00:06:57.254 00:38:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72363 00:06:57.254 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72363 ']' 00:06:57.254 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72363 00:06:57.254 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:57.254 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.254 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72363 00:06:57.254 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.254 killing process with pid 72363 00:06:57.254 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.254 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72363' 00:06:57.511 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72363 00:06:57.511 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72363 00:06:57.511 00:38:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:57.511 00:06:57.511 real 0m1.479s 00:06:57.511 user 0m3.506s 00:06:57.511 sys 0m0.380s 00:06:57.511 00:38:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.511 ************************************ 00:06:57.511 END TEST bdev_bounds 00:06:57.511 ************************************ 00:06:57.511 00:38:49 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:57.511 00:38:49 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:57.511 00:38:49 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:57.511 00:38:49 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.511 00:38:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.511 ************************************ 00:06:57.511 START TEST bdev_nbd 00:06:57.511 ************************************ 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72417 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72417 /var/tmp/spdk-nbd.sock 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72417 ']' 00:06:57.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
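bdev_nbd runs its target as bdev_svc on a dedicated socket, /var/tmp/spdk-nbd.sock, and exports each bdev as a kernel block node with the nbd_start_disk RPC before exercising it. Exporting and tearing down a single disk by hand would look roughly like this; the explicit /dev/nbd0 argument is optional, since the RPC can also pick a free node as it does in the trace below:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0
    # ... /dev/nbd0 now behaves like any block device ...
    $rpc nbd_stop_disk /dev/nbd0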
00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.511 00:38:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.512 00:38:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:57.769 [2024-11-17 00:38:49.604273] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:57.769 [2024-11-17 00:38:49.604417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.769 [2024-11-17 00:38:49.755467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.769 [2024-11-17 00:38:49.799312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.420 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.420 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:58.420 00:38:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:58.420 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.420 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.420 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:58.420 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:58.421 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.421 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.421 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:58.421 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:58.421 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:58.421 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:58.421 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:58.421 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:58.679 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:58.679 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:58.679 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:58.679 00:38:50 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:58.679 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:58.679 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:58.679 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:58.679 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.680 1+0 records in 00:06:58.680 1+0 records out 00:06:58.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108297 s, 3.8 MB/s 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:58.680 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.937 1+0 records in 00:06:58.937 1+0 records out 00:06:58.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695178 s, 5.9 MB/s 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
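Every waitfornbd call traced in this run has the same shape: poll /proc/partitions until the kernel exposes the device, then read one 4 KiB block with O_DIRECT and confirm the scratch file is non-empty. A self-contained sketch of that logic, with the scratch path and sleep interval as stand-ins (the trace uses test/bdev/nbdtest and a retry bound of 20):

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait for the kernel to register the device, as the grep above does.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # interval assumed; the trace only shows the retry bound
        done
        # Prove the device serves I/O: one direct 4 KiB read into a scratch file.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }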
00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:58.937 00:38:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.193 1+0 records in 00:06:59.193 1+0 records out 00:06:59.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577029 s, 7.1 MB/s 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:59.193 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:59.449 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.450 00:38:51 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.450 1+0 records in 00:06:59.450 1+0 records out 00:06:59.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523194 s, 7.8 MB/s 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:59.450 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.706 1+0 records in 00:06:59.706 1+0 records out 00:06:59.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691329 s, 5.9 MB/s 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.706 00:38:51 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:59.706 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.963 1+0 records in 00:06:59.963 1+0 records out 00:06:59.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700739 s, 5.8 MB/s 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:59.963 00:38:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd0", 00:07:00.221 "bdev_name": "Nvme0n1" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd1", 00:07:00.221 "bdev_name": "Nvme1n1" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd2", 00:07:00.221 "bdev_name": "Nvme2n1" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd3", 00:07:00.221 "bdev_name": "Nvme2n2" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd4", 00:07:00.221 "bdev_name": "Nvme2n3" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd5", 00:07:00.221 "bdev_name": "Nvme3n1" 00:07:00.221 } 00:07:00.221 ]' 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq 
-r '.[] | .nbd_device')) 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd0", 00:07:00.221 "bdev_name": "Nvme0n1" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd1", 00:07:00.221 "bdev_name": "Nvme1n1" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd2", 00:07:00.221 "bdev_name": "Nvme2n1" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd3", 00:07:00.221 "bdev_name": "Nvme2n2" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd4", 00:07:00.221 "bdev_name": "Nvme2n3" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd5", 00:07:00.221 "bdev_name": "Nvme3n1" 00:07:00.221 } 00:07:00.221 ]' 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.221 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.481 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.741 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.000 00:38:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.260 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:01.522 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:01.522 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:01.522 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:01.522 00:38:53 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.523 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.523 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:01.523 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.523 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.523 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.523 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.523 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.784 
00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:01.784 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:02.045 /dev/nbd0 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.045 1+0 records in 00:07:02.045 1+0 records out 00:07:02.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00171107 s, 2.4 MB/s 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:02.045 00:38:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:02.306 /dev/nbd1 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.306 00:38:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.306 1+0 records in 00:07:02.306 1+0 records out 00:07:02.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564408 s, 7.3 MB/s 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:02.306 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:02.567 /dev/nbd10 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.567 1+0 records in 00:07:02.567 1+0 records out 00:07:02.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606047 s, 6.8 MB/s 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:02.567 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:02.829 /dev/nbd11 
00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.829 1+0 records in 00:07:02.829 1+0 records out 00:07:02.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000929099 s, 4.4 MB/s 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:02.829 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:02.829 /dev/nbd12 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.091 1+0 records in 00:07:03.091 1+0 records out 00:07:03.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000914376 s, 4.5 MB/s 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:03.091 00:38:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:03.091 /dev/nbd13 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.091 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.091 1+0 records in 00:07:03.091 1+0 records out 00:07:03.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753117 s, 5.4 MB/s 00:07:03.092 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.092 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.092 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.092 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.092 00:38:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.092 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.354 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd0", 00:07:03.355 "bdev_name": "Nvme0n1" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd1", 
00:07:03.355 "bdev_name": "Nvme1n1" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd10", 00:07:03.355 "bdev_name": "Nvme2n1" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd11", 00:07:03.355 "bdev_name": "Nvme2n2" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd12", 00:07:03.355 "bdev_name": "Nvme2n3" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd13", 00:07:03.355 "bdev_name": "Nvme3n1" 00:07:03.355 } 00:07:03.355 ]' 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd0", 00:07:03.355 "bdev_name": "Nvme0n1" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd1", 00:07:03.355 "bdev_name": "Nvme1n1" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd10", 00:07:03.355 "bdev_name": "Nvme2n1" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd11", 00:07:03.355 "bdev_name": "Nvme2n2" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd12", 00:07:03.355 "bdev_name": "Nvme2n3" 00:07:03.355 }, 00:07:03.355 { 00:07:03.355 "nbd_device": "/dev/nbd13", 00:07:03.355 "bdev_name": "Nvme3n1" 00:07:03.355 } 00:07:03.355 ]' 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:03.355 /dev/nbd1 00:07:03.355 /dev/nbd10 00:07:03.355 /dev/nbd11 00:07:03.355 /dev/nbd12 00:07:03.355 /dev/nbd13' 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:03.355 /dev/nbd1 00:07:03.355 /dev/nbd10 00:07:03.355 /dev/nbd11 00:07:03.355 /dev/nbd12 00:07:03.355 /dev/nbd13' 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:03.355 256+0 records in 00:07:03.355 256+0 records out 00:07:03.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00617199 s, 170 MB/s 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.355 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.616 
256+0 records in 00:07:03.616 256+0 records out 00:07:03.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124696 s, 8.4 MB/s 00:07:03.616 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.616 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.877 256+0 records in 00:07:03.877 256+0 records out 00:07:03.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163401 s, 6.4 MB/s 00:07:03.877 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.877 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:03.877 256+0 records in 00:07:03.877 256+0 records out 00:07:03.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188184 s, 5.6 MB/s 00:07:03.877 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.877 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:04.139 256+0 records in 00:07:04.139 256+0 records out 00:07:04.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0701275 s, 15.0 MB/s 00:07:04.139 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.139 00:38:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:04.139 256+0 records in 00:07:04.139 256+0 records out 00:07:04.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133325 s, 7.9 MB/s 00:07:04.139 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.139 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:04.400 256+0 records in 00:07:04.400 256+0 records out 00:07:04.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136641 s, 7.7 MB/s 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 
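The dd/cmp sequence running through this stretch of the log is the actual data-integrity check: one shared 1 MiB random buffer is written through every NBD device with O_DIRECT, then read back and compared byte-for-byte. Compressed into a sketch, with everything lifted from the traced commands except the temp-file path:

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp_file=/tmp/nbdrandtest   # the trace uses test/bdev/nbdrandtest

    # One shared source buffer: 256 x 4 KiB blocks of random data.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    for dev in "${nbd_list[@]}"; do
        # oflag=direct bypasses the page cache, so the write reaches the bdev.
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    for dev in "${nbd_list[@]}"; do
        # -b reports differing bytes; -n 1M bounds the compare to the written range.
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"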
00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.400 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.661 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.661 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.661 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.661 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.661 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.661 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.661 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.661 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.661 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.662 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.922 00:38:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.182 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:05.443 00:38:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.443 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:05.704 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:05.965 malloc_lvol_verify 00:07:05.965 00:38:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:06.224 d776087d-246f-4d61-9de8-64d27047d528 00:07:06.224 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:06.484 b479343e-65d1-44c8-a098-6c2106254af5 00:07:06.484 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:06.744 /dev/nbd0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:06.744 mke2fs 1.47.0 (5-Feb-2023) 00:07:06.744 Discarding device blocks: 0/4096 done 00:07:06.744 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:06.744 00:07:06.744 Allocating group tables: 0/1 done 00:07:06.744 Writing inode tables: 0/1 done 00:07:06.744 Creating journal (1024 blocks): done 00:07:06.744 Writing superblocks and filesystem accounting information: 0/1 done 00:07:06.744 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72417 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72417 ']' 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72417 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72417 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.744 killing process with pid 72417 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72417' 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72417 00:07:06.744 00:38:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72417 00:07:07.004 00:38:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:07.004 00:07:07.004 
real 0m9.469s 00:07:07.004 user 0m13.435s 00:07:07.004 sys 0m3.229s 00:07:07.004 00:38:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.004 00:38:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:07.004 ************************************ 00:07:07.004 END TEST bdev_nbd 00:07:07.004 ************************************ 00:07:07.004 00:38:59 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:07.004 00:38:59 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:07.004 skipping fio tests on NVMe due to multi-ns failures. 00:07:07.004 00:38:59 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:07.004 00:38:59 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:07.004 00:38:59 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.004 00:38:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:07.004 00:38:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.004 00:38:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.265 ************************************ 00:07:07.265 START TEST bdev_verify 00:07:07.265 ************************************ 00:07:07.265 00:38:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.265 [2024-11-17 00:38:59.119839] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:07.265 [2024-11-17 00:38:59.119931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72785 ] 00:07:07.265 [2024-11-17 00:38:59.259850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.265 [2024-11-17 00:38:59.302585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.265 [2024-11-17 00:38:59.302723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.835 Running I/O for 5 seconds... 
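
The verify pass that follows is plain bdevperf: queue depth 128 (-q), 4 KiB I/Os (-o), the verify workload (-w), a 5-second run (-t), and core mask 0x3 (-m), which is why two reactors come up on cores 0 and 1. Setting aside the harness plumbing (it also passes -C and an empty trailing argument), a close standalone equivalent run from the SPDK repo root would be:

  # verify writes a data pattern and reads every block back to compare it
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -m 0x3
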
00:07:10.164 19328.00 IOPS, 75.50 MiB/s [2024-11-17T00:39:03.170Z] 19040.00 IOPS, 74.38 MiB/s [2024-11-17T00:39:04.113Z] 18986.67 IOPS, 74.17 MiB/s [2024-11-17T00:39:05.056Z] 18880.00 IOPS, 73.75 MiB/s [2024-11-17T00:39:05.056Z] 19072.00 IOPS, 74.50 MiB/s 00:07:12.993 Latency(us) 00:07:12.993 [2024-11-17T00:39:05.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.993 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x0 length 0xbd0bd 00:07:12.993 Nvme0n1 : 5.07 1576.54 6.16 0.00 0.00 80722.80 17946.78 78239.90 00:07:12.993 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:12.993 Nvme0n1 : 5.08 1562.38 6.10 0.00 0.00 81759.58 17845.96 77433.30 00:07:12.993 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x0 length 0xa0000 00:07:12.993 Nvme1n1 : 5.09 1583.48 6.19 0.00 0.00 80577.76 13208.02 72593.72 00:07:12.993 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0xa0000 length 0xa0000 00:07:12.993 Nvme1n1 : 5.08 1561.38 6.10 0.00 0.00 81652.50 21072.34 65334.35 00:07:12.993 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x0 length 0x80000 00:07:12.993 Nvme2n1 : 5.09 1583.03 6.18 0.00 0.00 80279.92 13510.50 63721.16 00:07:12.993 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x80000 length 0x80000 00:07:12.993 Nvme2n1 : 5.08 1560.92 6.10 0.00 0.00 81467.88 18753.38 61704.66 00:07:12.993 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x0 length 0x80000 00:07:12.993 Nvme2n2 : 5.10 1582.05 6.18 0.00 0.00 80136.25 14922.04 66140.95 00:07:12.993 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x80000 length 0x80000 00:07:12.993 Nvme2n2 : 5.09 1560.48 6.10 0.00 0.00 81302.77 17845.96 62914.56 00:07:12.993 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x0 length 0x80000 00:07:12.993 Nvme2n3 : 5.10 1581.63 6.18 0.00 0.00 80031.03 14720.39 65334.35 00:07:12.993 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x80000 length 0x80000 00:07:12.993 Nvme2n3 : 5.09 1560.02 6.09 0.00 0.00 81164.43 15123.69 64527.75 00:07:12.993 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x0 length 0x20000 00:07:12.993 Nvme3n1 : 5.10 1581.19 6.18 0.00 0.00 79923.55 13913.80 66140.95 00:07:12.993 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.993 Verification LBA range: start 0x20000 length 0x20000 00:07:12.993 Nvme3n1 : 5.09 1559.58 6.09 0.00 0.00 81026.08 12552.66 66140.95 00:07:12.993 [2024-11-17T00:39:05.056Z] =================================================================================================================== 00:07:12.993 [2024-11-17T00:39:05.056Z] Total : 18852.67 73.64 0.00 0.00 80832.65 12552.66 78239.90 00:07:14.381 00:07:14.381 real 0m6.956s 00:07:14.381 user 0m13.061s 00:07:14.381 sys 0m0.253s 00:07:14.381 00:39:06 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.381 ************************************ 00:07:14.381 END TEST bdev_verify 00:07:14.381 ************************************ 00:07:14.381 00:39:06 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:14.381 00:39:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:14.381 00:39:06 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:14.381 00:39:06 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.381 00:39:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.381 ************************************ 00:07:14.381 START TEST bdev_verify_big_io 00:07:14.381 ************************************ 00:07:14.381 00:39:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:14.381 [2024-11-17 00:39:06.178795] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:14.381 [2024-11-17 00:39:06.178980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72878 ] 00:07:14.381 [2024-11-17 00:39:06.334794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.381 [2024-11-17 00:39:06.408209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.381 [2024-11-17 00:39:06.408257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.954 Running I/O for 5 seconds... 
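
bdev_verify_big_io is the same workload with the I/O size raised from 4 KiB to 64 KiB, so throughput stays in the same ballpark while IOPS drop by roughly an order of magnitude (compare the ~19k IOPS above with the ~2k below). Only one flag changes:

  # identical to the verify pass above, but with 64 KiB I/Os
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -m 0x3
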
00:07:18.591 1742.00 IOPS, 108.88 MiB/s [2024-11-17T00:39:12.561Z] 2203.50 IOPS, 137.72 MiB/s [2024-11-17T00:39:12.821Z] 2393.00 IOPS, 149.56 MiB/s [2024-11-17T00:39:13.082Z] 2253.00 IOPS, 140.81 MiB/s 00:07:21.019 Latency(us) 00:07:21.019 [2024-11-17T00:39:13.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.019 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0x0 length 0xbd0b 00:07:21.019 Nvme0n1 : 5.63 140.23 8.76 0.00 0.00 882569.55 35893.56 1000180.18 00:07:21.019 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:21.019 Nvme0n1 : 5.67 146.66 9.17 0.00 0.00 842296.59 31658.93 929199.66 00:07:21.019 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0x0 length 0xa000 00:07:21.019 Nvme1n1 : 5.74 138.40 8.65 0.00 0.00 867671.15 46177.67 1555118.87 00:07:21.019 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0xa000 length 0xa000 00:07:21.019 Nvme1n1 : 5.68 146.25 9.14 0.00 0.00 822750.97 66140.95 796917.76 00:07:21.019 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0x0 length 0x8000 00:07:21.019 Nvme2n1 : 5.75 137.50 8.59 0.00 0.00 842845.92 44564.48 1587382.74 00:07:21.019 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0x8000 length 0x8000 00:07:21.019 Nvme2n1 : 5.77 151.51 9.47 0.00 0.00 779054.96 60898.07 809823.31 00:07:21.019 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0x0 length 0x8000 00:07:21.019 Nvme2n2 : 5.75 145.47 9.09 0.00 0.00 773663.45 60494.77 1193763.45 00:07:21.019 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0x8000 length 0x8000 00:07:21.019 Nvme2n2 : 5.77 151.13 9.45 0.00 0.00 757993.45 62511.26 819502.47 00:07:21.019 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0x0 length 0x8000 00:07:21.019 Nvme2n3 : 5.84 157.60 9.85 0.00 0.00 696791.14 13006.38 1651910.50 00:07:21.019 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.019 Verification LBA range: start 0x8000 length 0x8000 00:07:21.019 Nvme2n3 : 5.81 157.76 9.86 0.00 0.00 709083.56 32465.53 832408.02 00:07:21.019 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.020 Verification LBA range: start 0x0 length 0x2000 00:07:21.020 Nvme3n1 : 5.92 217.51 13.59 0.00 0.00 493679.18 186.68 1013085.74 00:07:21.020 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.020 Verification LBA range: start 0x2000 length 0x2000 00:07:21.020 Nvme3n1 : 5.81 169.56 10.60 0.00 0.00 643558.16 2129.92 871124.68 00:07:21.020 [2024-11-17T00:39:13.083Z] =================================================================================================================== 00:07:21.020 [2024-11-17T00:39:13.083Z] Total : 1859.60 116.23 0.00 0.00 744377.30 186.68 1651910.50 00:07:22.405 00:07:22.405 real 0m8.068s 00:07:22.405 user 0m15.127s 00:07:22.405 sys 0m0.369s 00:07:22.405 00:39:14 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:22.405 ************************************ 00:07:22.405 END TEST bdev_verify_big_io 00:07:22.405 ************************************ 00:07:22.405 00:39:14 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:22.406 00:39:14 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.406 00:39:14 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:22.406 00:39:14 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.406 00:39:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:22.406 ************************************ 00:07:22.406 START TEST bdev_write_zeroes 00:07:22.406 ************************************ 00:07:22.406 00:39:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.406 [2024-11-17 00:39:14.319469] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:22.406 [2024-11-17 00:39:14.319641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72978 ] 00:07:22.667 [2024-11-17 00:39:14.476633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.667 [2024-11-17 00:39:14.551469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.239 Running I/O for 1 seconds... 
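
bdev_write_zeroes swaps the workload for write_zeroes on a single core (-c 0x1 in the EAL arguments below), exercising the bdev zero-fill path rather than data verification; one second of runtime is enough for that. Stripped down to the defining flags:

  # write_zeroes issues zero-fill commands instead of pattern writes
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1
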
00:07:25.128 34.00 IOPS, 0.13 MiB/s 00:07:25.128 Latency(us) 00:07:25.128 [2024-11-17T00:39:17.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.128 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.128 Nvme0n1 : 1.80 90.15 0.35 0.00 0.00 1340478.82 13208.02 1806777.11 00:07:25.128 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.128 Nvme1n1 : 1.54 83.30 0.33 0.00 0.00 1532534.15 1529307.77 1535760.54 00:07:25.128 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.128 Nvme2n1 : 1.53 83.48 0.33 0.00 0.00 1526081.38 1522854.99 1529307.77 00:07:25.128 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.128 Nvme2n2 : 1.53 83.41 0.33 0.00 0.00 1526081.38 1522854.99 1529307.77 00:07:25.128 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.128 Nvme2n3 : 1.54 83.23 0.33 0.00 0.00 1526081.38 1522854.99 1529307.77 00:07:25.128 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.128 Nvme3n1 : 1.54 83.16 0.32 0.00 0.00 1522854.99 1516402.22 1529307.77 00:07:25.128 [2024-11-17T00:39:17.191Z] =================================================================================================================== 00:07:25.128 [2024-11-17T00:39:17.191Z] Total : 506.72 1.98 0.00 0.00 1489105.52 13208.02 1806777.11 00:07:25.389 00:07:25.390 real 0m3.084s 00:07:25.390 user 0m2.649s 00:07:25.390 sys 0m0.311s 00:07:25.390 ************************************ 00:07:25.390 END TEST bdev_write_zeroes 00:07:25.390 ************************************ 00:07:25.390 00:39:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.390 00:39:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:25.390 00:39:17 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.390 00:39:17 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:25.390 00:39:17 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.390 00:39:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.390 ************************************ 00:07:25.390 START TEST bdev_json_nonenclosed 00:07:25.390 ************************************ 00:07:25.390 00:39:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.651 [2024-11-17 00:39:17.469781] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:25.651 [2024-11-17 00:39:17.469905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73033 ] 00:07:25.651 [2024-11-17 00:39:17.625731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.651 [2024-11-17 00:39:17.700143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.651 [2024-11-17 00:39:17.700290] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:25.651 [2024-11-17 00:39:17.700314] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:25.651 [2024-11-17 00:39:17.700328] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.913 00:07:25.913 real 0m0.442s 00:07:25.913 user 0m0.185s 00:07:25.913 sys 0m0.152s 00:07:25.913 00:39:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.913 00:39:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:25.913 ************************************ 00:07:25.913 END TEST bdev_json_nonenclosed 00:07:25.913 ************************************ 00:07:25.913 00:39:17 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.913 00:39:17 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:25.913 00:39:17 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.913 00:39:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.913 ************************************ 00:07:25.913 START TEST bdev_json_nonarray 00:07:25.913 ************************************ 00:07:25.913 00:39:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.175 [2024-11-17 00:39:17.975462] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:26.175 [2024-11-17 00:39:17.975614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73054 ] 00:07:26.175 [2024-11-17 00:39:18.129995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.175 [2024-11-17 00:39:18.204489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.175 [2024-11-17 00:39:18.204644] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
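
Both JSON tests in this stretch are negative cases: bdevperf is handed a deliberately malformed config and must exit non-zero through the normal spdk_app_stop path rather than crash. The two error messages pin down the failure modes; the file contents below are an illustrative assumption reconstructed from those messages, not copied from the actual test files:

  # nonenclosed.json: a valid fragment, but not enclosed in a top-level {...}
  cat > nonenclosed.json <<'EOF'
  "subsystems": []
  EOF
  # nonarray.json: enclosed, but "subsystems" is not an array
  cat > nonarray.json <<'EOF'
  { "subsystems": {} }
  EOF
  # either file should make bdevperf fail cleanly with a non-zero exit code
  ./build/examples/bdevperf --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
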
00:07:26.175 [2024-11-17 00:39:18.204665] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.175 [2024-11-17 00:39:18.204685] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.437 00:07:26.437 real 0m0.436s 00:07:26.437 user 0m0.193s 00:07:26.437 sys 0m0.137s 00:07:26.437 00:39:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.437 ************************************ 00:07:26.437 END TEST bdev_json_nonarray 00:07:26.437 ************************************ 00:07:26.437 00:39:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:26.437 00:39:18 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:26.437 00:07:26.437 real 0m33.346s 00:07:26.437 user 0m50.742s 00:07:26.437 sys 0m5.836s 00:07:26.437 00:39:18 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.437 00:39:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.437 ************************************ 00:07:26.437 END TEST blockdev_nvme 00:07:26.437 ************************************ 00:07:26.437 00:39:18 -- spdk/autotest.sh@209 -- # uname -s 00:07:26.437 00:39:18 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:26.437 00:39:18 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:26.437 00:39:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.437 00:39:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.437 00:39:18 -- common/autotest_common.sh@10 -- # set +x 00:07:26.437 ************************************ 00:07:26.437 START TEST blockdev_nvme_gpt 00:07:26.437 ************************************ 00:07:26.437 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:26.699 * Looking for test storage... 
00:07:26.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:26.699 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:26.699 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:26.699 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:26.699 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.699 00:39:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.700 --rc genhtml_branch_coverage=1 00:07:26.700 --rc genhtml_function_coverage=1 00:07:26.700 --rc genhtml_legend=1 00:07:26.700 --rc geninfo_all_blocks=1 00:07:26.700 --rc geninfo_unexecuted_blocks=1 00:07:26.700 00:07:26.700 ' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.700 --rc 
genhtml_branch_coverage=1 00:07:26.700 --rc genhtml_function_coverage=1 00:07:26.700 --rc genhtml_legend=1 00:07:26.700 --rc geninfo_all_blocks=1 00:07:26.700 --rc geninfo_unexecuted_blocks=1 00:07:26.700 00:07:26.700 ' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.700 --rc genhtml_branch_coverage=1 00:07:26.700 --rc genhtml_function_coverage=1 00:07:26.700 --rc genhtml_legend=1 00:07:26.700 --rc geninfo_all_blocks=1 00:07:26.700 --rc geninfo_unexecuted_blocks=1 00:07:26.700 00:07:26.700 ' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.700 --rc genhtml_branch_coverage=1 00:07:26.700 --rc genhtml_function_coverage=1 00:07:26.700 --rc genhtml_legend=1 00:07:26.700 --rc geninfo_all_blocks=1 00:07:26.700 --rc geninfo_unexecuted_blocks=1 00:07:26.700 00:07:26.700 ' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73138 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73138 
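
Before any of the gpt tests can run, setup_gpt_conf (traced below) has to manufacture GPT bdevs: it picks the first non-zoned NVMe disk whose label parted reports as unrecognised, writes a GPT label with two half-disk partitions, then retypes them with SPDK's partition-type GUIDs (parsed out of module/bdev/gpt/gpt.h) so the gpt bdev module will claim them as Nvme1n1p1 and Nvme1n1p2. Condensed from the trace that follows:

  # two half-disk partitions on the scratch disk (/dev/nvme0n1 in this run)
  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  # partition 1 gets the current SPDK type GUID, partition 2 the legacy one
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
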
00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 73138 ']' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.700 00:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.700 [2024-11-17 00:39:18.746450] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:26.700 [2024-11-17 00:39:18.746780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73138 ] 00:07:26.962 [2024-11-17 00:39:18.899391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.962 [2024-11-17 00:39:18.974488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.544 00:39:19 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.544 00:39:19 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:27.544 00:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:27.544 00:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:27.544 00:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:28.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:28.117 Waiting for block devices as requested 00:07:28.117 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.379 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.379 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.640 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.937 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:33.937 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.937 00:39:25 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.937 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.938 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:33.938 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:33.938 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:33.938 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.938 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.938 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:33.938 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:33.938 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:33.938 00:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:33.938 00:39:25 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:33.938 BYT; 00:07:33.938 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:33.938 BYT; 00:07:33.938 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:33.938 00:39:25 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:33.938 00:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:34.881 The operation has completed successfully. 00:07:34.881 00:39:26 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:35.883 The operation has completed successfully. 00:07:35.883 00:39:27 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:36.715 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.715 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.715 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.715 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.715 00:39:28 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:36.715 00:39:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.715 00:39:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.715 [] 00:07:36.715 00:39:28 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.715 00:39:28 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:36.715 00:39:28 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:36.715 00:39:28 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:36.715 00:39:28 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:36.977 00:39:28 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:36.977 00:39:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.977 00:39:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:37.239 00:39:29 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.239 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:37.239 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:37.240 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3e61cfc5-a2d1-4993-911b-f2aebcc5e5bf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3e61cfc5-a2d1-4993-911b-f2aebcc5e5bf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "80a5d955-f303-4386-8231-f4bef369d974"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "80a5d955-f303-4386-8231-f4bef369d974",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "66cd8589-3629-4366-a7d3-258e472351f5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "66cd8589-3629-4366-a7d3-258e472351f5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6ea280db-1608-4d8d-977d-acfe209d16f1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6ea280db-1608-4d8d-977d-acfe209d16f1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4e0f80d3-b412-467b-8cfa-a058c9cffd3d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4e0f80d3-b412-467b-8cfa-a058c9cffd3d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:37.240 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:37.240 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:37.240 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:37.240 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73138 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 73138 ']' 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 73138 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73138 00:07:37.240 killing process with pid 73138 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73138' 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 73138 00:07:37.240 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 73138 00:07:37.501 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:37.501 00:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:37.501 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:37.501 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.501 00:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.763 ************************************ 00:07:37.763 START TEST bdev_hello_world 00:07:37.763 ************************************ 00:07:37.763 00:39:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:37.763 
[2024-11-17 00:39:29.631030] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:37.763 [2024-11-17 00:39:29.631169] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73754 ] 00:07:37.763 [2024-11-17 00:39:29.779777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.763 [2024-11-17 00:39:29.823927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.335 [2024-11-17 00:39:30.199942] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:38.335 [2024-11-17 00:39:30.199994] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:38.335 [2024-11-17 00:39:30.200020] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:38.335 [2024-11-17 00:39:30.202208] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:38.335 [2024-11-17 00:39:30.202881] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:38.335 [2024-11-17 00:39:30.202909] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:38.335 [2024-11-17 00:39:30.203486] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:38.335 00:07:38.335 [2024-11-17 00:39:30.203517] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:38.596 00:07:38.596 real 0m0.877s 00:07:38.596 user 0m0.594s 00:07:38.596 sys 0m0.178s 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:38.596 ************************************ 00:07:38.596 END TEST bdev_hello_world 00:07:38.596 ************************************ 00:07:38.596 00:39:30 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:38.596 00:39:30 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:38.596 00:39:30 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.596 00:39:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.596 ************************************ 00:07:38.596 START TEST bdev_bounds 00:07:38.596 ************************************ 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:38.596 Process bdevio pid: 73780 00:07:38.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
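# bdev_bounds drives SPDK's bdevio app over an RPC socket. A sketch of the
# same flow with the paths shown in the trace (/var/tmp/spdk.sock is the
# harness default socket; the backgrounding and wait logic here is a
# simplification of the harness's waitforlisten helper):
#
#   ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json '' &
#   bdevio_pid=$!
#   # ... wait for /var/tmp/spdk.sock to accept RPCs ...
#   ./test/bdev/bdevio/tests.py perform_tests
#   kill "$bdevio_pid" && wait "$bdevio_pid"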
00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73780 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73780' 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73780 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73780 ']' 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.596 00:39:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:38.596 [2024-11-17 00:39:30.589835] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:38.596 [2024-11-17 00:39:30.589979] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73780 ] 00:07:38.858 [2024-11-17 00:39:30.743687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.858 [2024-11-17 00:39:30.819103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.858 [2024-11-17 00:39:30.819407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.858 [2024-11-17 00:39:30.819456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.430 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.430 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:39.430 00:39:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:39.692 I/O targets: 00:07:39.692 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:39.692 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:39.692 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:39.692 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:39.692 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:39.692 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:39.692 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:39.692 00:07:39.692 00:07:39.692 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.692 http://cunit.sourceforge.net/ 00:07:39.692 00:07:39.692 00:07:39.692 Suite: bdevio tests on: Nvme3n1 00:07:39.692 Test: blockdev write read block ...passed 00:07:39.692 Test: blockdev write zeroes read block ...passed 00:07:39.692 Test: blockdev write zeroes read no split ...passed 00:07:39.692 Test: blockdev write zeroes read split ...passed 00:07:39.692 Test: blockdev write zeroes 
read split partial ...passed 00:07:39.692 Test: blockdev reset ...[2024-11-17 00:39:31.584058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:39.692 [2024-11-17 00:39:31.588338] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:39.692 passed 00:07:39.692 Test: blockdev write read 8 blocks ...passed 00:07:39.692 Test: blockdev write read size > 128k ...passed 00:07:39.692 Test: blockdev write read invalid size ...passed 00:07:39.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:39.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:39.692 Test: blockdev write read max offset ...passed 00:07:39.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:39.692 Test: blockdev writev readv 8 blocks ...passed 00:07:39.692 Test: blockdev writev readv 30 x 1block ...passed 00:07:39.692 Test: blockdev writev readv block ...passed 00:07:39.692 Test: blockdev writev readv size > 128k ...passed 00:07:39.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:39.692 Test: blockdev comparev and writev ...[2024-11-17 00:39:31.602398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ccc0e000 len:0x1000 00:07:39.692 [2024-11-17 00:39:31.602471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:39.692 passed 00:07:39.692 Test: blockdev nvme passthru rw ...passed 00:07:39.692 Test: blockdev nvme passthru vendor specific ...[2024-11-17 00:39:31.604848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:39.692 [2024-11-17 00:39:31.605051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:39.692 passed 00:07:39.692 Test: blockdev nvme admin passthru ...passed 00:07:39.692 Test: blockdev copy ...passed 00:07:39.692 Suite: bdevio tests on: Nvme2n3 00:07:39.692 Test: blockdev write read block ...passed 00:07:39.692 Test: blockdev write zeroes read block ...passed 00:07:39.692 Test: blockdev write zeroes read no split ...passed 00:07:39.692 Test: blockdev write zeroes read split ...passed 00:07:39.692 Test: blockdev write zeroes read split partial ...passed 00:07:39.692 Test: blockdev reset ...[2024-11-17 00:39:31.637581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:39.692 [2024-11-17 00:39:31.641904] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:39.692 passed 00:07:39.692 Test: blockdev write read 8 blocks ...passed 00:07:39.692 Test: blockdev write read size > 128k ...passed 00:07:39.692 Test: blockdev write read invalid size ...passed 00:07:39.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:39.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:39.692 Test: blockdev write read max offset ...passed 00:07:39.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:39.692 Test: blockdev writev readv 8 blocks ...passed 00:07:39.692 Test: blockdev writev readv 30 x 1block ...passed 00:07:39.692 Test: blockdev writev readv block ...passed 00:07:39.692 Test: blockdev writev readv size > 128k ...passed 00:07:39.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:39.692 Test: blockdev comparev and writev ...[2024-11-17 00:39:31.660234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ccc0a000 len:0x1000 00:07:39.692 [2024-11-17 00:39:31.660296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:39.692 passed 00:07:39.692 Test: blockdev nvme passthru rw ...passed 00:07:39.692 Test: blockdev nvme passthru vendor specific ...passed 00:07:39.692 Test: blockdev nvme admin passthru ...[2024-11-17 00:39:31.663016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:39.692 [2024-11-17 00:39:31.663068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:39.692 passed 00:07:39.692 Test: blockdev copy ...passed 00:07:39.692 Suite: bdevio tests on: Nvme2n2 00:07:39.692 Test: blockdev write read block ...passed 00:07:39.692 Test: blockdev write zeroes read block ...passed 00:07:39.692 Test: blockdev write zeroes read no split ...passed 00:07:39.692 Test: blockdev write zeroes read split ...passed 00:07:39.692 Test: blockdev write zeroes read split partial ...passed 00:07:39.692 Test: blockdev reset ...[2024-11-17 00:39:31.690822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:39.692 [2024-11-17 00:39:31.693253] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:39.692 passed 00:07:39.692 Test: blockdev write read 8 blocks ...passed 00:07:39.692 Test: blockdev write read size > 128k ...passed 00:07:39.692 Test: blockdev write read invalid size ...passed 00:07:39.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:39.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:39.692 Test: blockdev write read max offset ...passed 00:07:39.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:39.692 Test: blockdev writev readv 8 blocks ...passed 00:07:39.692 Test: blockdev writev readv 30 x 1block ...passed 00:07:39.692 Test: blockdev writev readv block ...passed 00:07:39.692 Test: blockdev writev readv size > 128k ...passed 00:07:39.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:39.692 Test: blockdev comparev and writev ...[2024-11-17 00:39:31.704763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0c05000 len:0x1000 00:07:39.692 [2024-11-17 00:39:31.704817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:39.692 passed 00:07:39.692 Test: blockdev nvme passthru rw ...passed 00:07:39.692 Test: blockdev nvme passthru vendor specific ...passed 00:07:39.692 Test: blockdev nvme admin passthru ...[2024-11-17 00:39:31.706366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:39.692 [2024-11-17 00:39:31.706404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:39.692 passed 00:07:39.692 Test: blockdev copy ...passed 00:07:39.692 Suite: bdevio tests on: Nvme2n1 00:07:39.692 Test: blockdev write read block ...passed 00:07:39.692 Test: blockdev write zeroes read block ...passed 00:07:39.692 Test: blockdev write zeroes read no split ...passed 00:07:39.692 Test: blockdev write zeroes read split ...passed 00:07:39.692 Test: blockdev write zeroes read split partial ...passed 00:07:39.692 Test: blockdev reset ...[2024-11-17 00:39:31.734849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:39.692 [2024-11-17 00:39:31.737370] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:39.692 passed 00:07:39.692 Test: blockdev write read 8 blocks ...passed 00:07:39.692 Test: blockdev write read size > 128k ...passed 00:07:39.692 Test: blockdev write read invalid size ...passed 00:07:39.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:39.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:39.692 Test: blockdev write read max offset ...passed 00:07:39.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:39.692 Test: blockdev writev readv 8 blocks ...passed 00:07:39.692 Test: blockdev writev readv 30 x 1block ...passed 00:07:39.692 Test: blockdev writev readv block ...passed 00:07:39.692 Test: blockdev writev readv size > 128k ...passed 00:07:39.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:39.692 Test: blockdev comparev and writev ...[2024-11-17 00:39:31.745694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc802000 len:0x1000 00:07:39.693 [2024-11-17 00:39:31.745749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:39.693 passed 00:07:39.693 Test: blockdev nvme passthru rw ...passed 00:07:39.693 Test: blockdev nvme passthru vendor specific ...passed 00:07:39.693 Test: blockdev nvme admin passthru ...[2024-11-17 00:39:31.747173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:39.693 [2024-11-17 00:39:31.747212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:39.955 passed 00:07:39.955 Test: blockdev copy ...passed 00:07:39.955 Suite: bdevio tests on: Nvme1n1p2 00:07:39.955 Test: blockdev write read block ...passed 00:07:39.955 Test: blockdev write zeroes read block ...passed 00:07:39.955 Test: blockdev write zeroes read no split ...passed 00:07:39.955 Test: blockdev write zeroes read split ...passed 00:07:39.955 Test: blockdev write zeroes read split partial ...passed 00:07:39.955 Test: blockdev reset ...[2024-11-17 00:39:31.779290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:39.955 [2024-11-17 00:39:31.781385] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:39.955 passed 00:07:39.955 Test: blockdev write read 8 blocks ...passed 00:07:39.955 Test: blockdev write read size > 128k ...passed 00:07:39.955 Test: blockdev write read invalid size ...passed 00:07:39.955 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:39.955 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:39.955 Test: blockdev write read max offset ...passed 00:07:39.955 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:39.955 Test: blockdev writev readv 8 blocks ...passed 00:07:39.956 Test: blockdev writev readv 30 x 1block ...passed 00:07:39.956 Test: blockdev writev readv block ...passed 00:07:39.956 Test: blockdev writev readv size > 128k ...passed 00:07:39.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:39.956 Test: blockdev comparev and writev ...[2024-11-17 00:39:31.789649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e463b000 len:0x1000 00:07:39.956 [2024-11-17 00:39:31.789707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:39.956 passed 00:07:39.956 Test: blockdev nvme passthru rw ...passed 00:07:39.956 Test: blockdev nvme passthru vendor specific ...passed 00:07:39.956 Test: blockdev nvme admin passthru ...passed 00:07:39.956 Test: blockdev copy ...passed 00:07:39.956 Suite: bdevio tests on: Nvme1n1p1 00:07:39.956 Test: blockdev write read block ...passed 00:07:39.956 Test: blockdev write zeroes read block ...passed 00:07:39.956 Test: blockdev write zeroes read no split ...passed 00:07:39.956 Test: blockdev write zeroes read split ...passed 00:07:39.956 Test: blockdev write zeroes read split partial ...passed 00:07:39.956 Test: blockdev reset ...[2024-11-17 00:39:31.811004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:39.956 [2024-11-17 00:39:31.813161] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:39.956 passed 00:07:39.956 Test: blockdev write read 8 blocks ...passed 00:07:39.956 Test: blockdev write read size > 128k ...passed 00:07:39.956 Test: blockdev write read invalid size ...passed 00:07:39.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:39.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:39.956 Test: blockdev write read max offset ...passed 00:07:39.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:39.956 Test: blockdev writev readv 8 blocks ...passed 00:07:39.956 Test: blockdev writev readv 30 x 1block ...passed 00:07:39.956 Test: blockdev writev readv block ...passed 00:07:39.956 Test: blockdev writev readv size > 128k ...passed 00:07:39.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:39.956 Test: blockdev comparev and writev ...[2024-11-17 00:39:31.830013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e4637000 len:0x1000 00:07:39.956 [2024-11-17 00:39:31.830073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:39.956 passed 00:07:39.956 Test: blockdev nvme passthru rw ...passed 00:07:39.956 Test: blockdev nvme passthru vendor specific ...passed 00:07:39.956 Test: blockdev nvme admin passthru ...passed 00:07:39.956 Test: blockdev copy ...passed 00:07:39.956 Suite: bdevio tests on: Nvme0n1 00:07:39.956 Test: blockdev write read block ...passed 00:07:39.956 Test: blockdev write zeroes read block ...passed 00:07:39.956 Test: blockdev write zeroes read no split ...passed 00:07:39.956 Test: blockdev write zeroes read split ...passed 00:07:39.956 Test: blockdev write zeroes read split partial ...passed 00:07:39.956 Test: blockdev reset ...[2024-11-17 00:39:31.855363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:39.956 [2024-11-17 00:39:31.857581] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:39.956 passed 00:07:39.956 Test: blockdev write read 8 blocks ...passed 00:07:39.956 Test: blockdev write read size > 128k ...passed 00:07:39.956 Test: blockdev write read invalid size ...passed 00:07:39.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:39.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:39.956 Test: blockdev write read max offset ...passed 00:07:39.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:39.956 Test: blockdev writev readv 8 blocks ...passed 00:07:39.956 Test: blockdev writev readv 30 x 1block ...passed 00:07:39.956 Test: blockdev writev readv block ...passed 00:07:39.956 Test: blockdev writev readv size > 128k ...passed 00:07:39.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:39.956 Test: blockdev comparev and writev ...passed 00:07:39.956 Test: blockdev nvme passthru rw ...[2024-11-17 00:39:31.865343] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:39.956 separate metadata which is not supported yet. 
00:07:39.956 passed 00:07:39.956 Test: blockdev nvme passthru vendor specific ...passed 00:07:39.956 Test: blockdev nvme admin passthru ...[2024-11-17 00:39:31.865958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:39.956 [2024-11-17 00:39:31.866022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:39.956 passed 00:07:39.956 Test: blockdev copy ...passed 00:07:39.956 00:07:39.956 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.956 suites 7 7 n/a 0 0 00:07:39.956 tests 161 161 161 0 0 00:07:39.956 asserts 1025 1025 1025 0 n/a 00:07:39.956 00:07:39.956 Elapsed time = 0.722 seconds 00:07:39.956 0 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73780 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73780 ']' 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73780 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73780 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73780' 00:07:39.956 killing process with pid 73780 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73780 00:07:39.956 00:39:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73780 00:07:40.218 00:39:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:40.218 00:07:40.218 real 0m1.699s 00:07:40.218 user 0m3.991s 00:07:40.218 sys 0m0.392s 00:07:40.218 00:39:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.218 00:39:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:40.218 ************************************ 00:07:40.218 END TEST bdev_bounds 00:07:40.218 ************************************ 00:07:40.218 00:39:32 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:40.218 00:39:32 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:40.218 00:39:32 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.218 00:39:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.481 ************************************ 00:07:40.481 START TEST bdev_nbd 00:07:40.481 ************************************ 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73828 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73828 /var/tmp/spdk-nbd.sock 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73828 ']' 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:40.481 00:39:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:40.481 [2024-11-17 00:39:32.368504] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
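# The [[ -e /sys/module/nbd ]] check above gates the whole nbd test on the
# kernel nbd module being loaded. A hedged sketch of satisfying that
# prerequisite (standard module name; loading typically requires root):
#
#   sudo modprobe nbd
#   [[ -e /sys/module/nbd ]] && echo "nbd module present"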
00:07:40.481 [2024-11-17 00:39:32.368832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.481 [2024-11-17 00:39:32.517470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.743 [2024-11-17 00:39:32.589606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.314 1+0 records in 00:07:41.314 1+0 records out 00:07:41.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556338 s, 7.4 MB/s 00:07:41.314 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.573 1+0 records in 00:07:41.573 1+0 records out 00:07:41.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514835 s, 8.0 MB/s 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.573 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.831 1+0 records in 00:07:41.831 1+0 records out 00:07:41.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695702 s, 5.9 MB/s 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:41.831 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.832 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.832 00:39:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:41.832 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.832 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.832 00:39:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:42.089 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:42.089 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.090 1+0 records in 00:07:42.090 1+0 records out 00:07:42.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00236182 s, 1.7 MB/s 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:42.090 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.348 1+0 records in 00:07:42.348 1+0 records out 00:07:42.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599968 s, 6.8 MB/s 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:42.348 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
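# Each bdev above is exported over NBD and then verified with a single direct
# 4 KiB read. A sketch of one iteration of that pattern, using the rpc socket
# and dd flags from the trace (the scratch file path is an assumption):
#
#   dev=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3)
#   grep -q -w "$(basename "$dev")" /proc/partitions   # wait for the kernel
#   dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
#   stat -c %s /tmp/nbdtest                            # expect 4096
#   rm -f /tmp/nbdtest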
00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.606 1+0 records in 00:07:42.606 1+0 records out 00:07:42.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692327 s, 5.9 MB/s 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:42.606 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.866 1+0 records in 00:07:42.866 1+0 records out 00:07:42.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739108 s, 5.5 MB/s 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:42.866 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.126 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:43.126 { 00:07:43.126 "nbd_device": "/dev/nbd0", 00:07:43.126 "bdev_name": "Nvme0n1" 00:07:43.126 }, 00:07:43.126 { 00:07:43.126 "nbd_device": "/dev/nbd1", 00:07:43.126 "bdev_name": "Nvme1n1p1" 00:07:43.126 }, 00:07:43.126 { 00:07:43.126 "nbd_device": "/dev/nbd2", 00:07:43.126 "bdev_name": "Nvme1n1p2" 00:07:43.126 }, 00:07:43.126 { 00:07:43.126 "nbd_device": "/dev/nbd3", 00:07:43.126 "bdev_name": "Nvme2n1" 00:07:43.126 }, 00:07:43.126 { 00:07:43.127 "nbd_device": "/dev/nbd4", 00:07:43.127 "bdev_name": "Nvme2n2" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "nbd_device": "/dev/nbd5", 00:07:43.127 "bdev_name": "Nvme2n3" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "nbd_device": "/dev/nbd6", 00:07:43.127 "bdev_name": "Nvme3n1" 00:07:43.127 } 00:07:43.127 ]' 00:07:43.127 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:43.127 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:43.127 00:39:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:43.127 { 00:07:43.127 "nbd_device": "/dev/nbd0", 00:07:43.127 "bdev_name": "Nvme0n1" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "nbd_device": "/dev/nbd1", 00:07:43.127 "bdev_name": "Nvme1n1p1" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "nbd_device": "/dev/nbd2", 00:07:43.127 "bdev_name": "Nvme1n1p2" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "nbd_device": "/dev/nbd3", 00:07:43.127 "bdev_name": "Nvme2n1" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "nbd_device": "/dev/nbd4", 00:07:43.127 "bdev_name": "Nvme2n2" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "nbd_device": "/dev/nbd5", 00:07:43.127 "bdev_name": "Nvme2n3" 00:07:43.127 }, 00:07:43.127 { 00:07:43.127 "nbd_device": "/dev/nbd6", 00:07:43.127 "bdev_name": "Nvme3n1" 00:07:43.127 } 00:07:43.127 ]' 00:07:43.127 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:43.127 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.127 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:43.127 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:43.127 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:43.127 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.127 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.388 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.650 00:39:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.912 00:39:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.173 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.434 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
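# Once every export is stopped, the harness lists the remaining NBD disks and
# expects the count of /dev/nbd entries to be zero. A sketch of that check
# with the same rpc socket and jq filter as in the trace (grep -c exits
# nonzero on a zero count, so the || true guard here is an assumption):
#
#   ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
#     | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true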
00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.694 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:44.956 00:39:36 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.956 00:39:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:45.218 /dev/nbd0 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.218 1+0 records in 00:07:45.218 1+0 records out 00:07:45.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122507 s, 3.3 MB/s 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:45.218 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:45.479 /dev/nbd1 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.479 00:39:37 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.479 1+0 records in 00:07:45.479 1+0 records out 00:07:45.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504904 s, 8.1 MB/s 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:45.479 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:45.741 /dev/nbd10 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.741 1+0 records in 00:07:45.741 1+0 records out 00:07:45.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442852 s, 9.2 MB/s 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:45.741 /dev/nbd11 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.741 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.742 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.003 1+0 records in 00:07:46.003 1+0 records out 00:07:46.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298283 s, 13.7 MB/s 00:07:46.003 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.003 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.003 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.003 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.003 00:39:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.003 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.003 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:46.003 00:39:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:46.003 /dev/nbd12 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 
/proc/partitions 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.003 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.004 1+0 records in 00:07:46.004 1+0 records out 00:07:46.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657743 s, 6.2 MB/s 00:07:46.004 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.004 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.004 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.004 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.004 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.004 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.004 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:46.004 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:46.265 /dev/nbd13 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.265 1+0 records in 00:07:46.265 1+0 records out 00:07:46.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405915 s, 10.1 MB/s 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.265 00:39:38 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:46.265 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:46.526 /dev/nbd14 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.526 1+0 records in 00:07:46.526 1+0 records out 00:07:46.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105508 s, 3.9 MB/s 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.526 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.787 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd0", 00:07:46.787 "bdev_name": "Nvme0n1" 00:07:46.787 }, 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd1", 00:07:46.787 "bdev_name": "Nvme1n1p1" 00:07:46.787 }, 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd10", 00:07:46.787 "bdev_name": "Nvme1n1p2" 00:07:46.787 }, 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd11", 00:07:46.787 "bdev_name": "Nvme2n1" 00:07:46.787 }, 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd12", 00:07:46.787 "bdev_name": "Nvme2n2" 00:07:46.787 }, 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd13", 
00:07:46.787 "bdev_name": "Nvme2n3" 00:07:46.787 }, 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd14", 00:07:46.787 "bdev_name": "Nvme3n1" 00:07:46.787 } 00:07:46.787 ]' 00:07:46.787 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd0", 00:07:46.787 "bdev_name": "Nvme0n1" 00:07:46.787 }, 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd1", 00:07:46.787 "bdev_name": "Nvme1n1p1" 00:07:46.787 }, 00:07:46.787 { 00:07:46.787 "nbd_device": "/dev/nbd10", 00:07:46.787 "bdev_name": "Nvme1n1p2" 00:07:46.787 }, 00:07:46.788 { 00:07:46.788 "nbd_device": "/dev/nbd11", 00:07:46.788 "bdev_name": "Nvme2n1" 00:07:46.788 }, 00:07:46.788 { 00:07:46.788 "nbd_device": "/dev/nbd12", 00:07:46.788 "bdev_name": "Nvme2n2" 00:07:46.788 }, 00:07:46.788 { 00:07:46.788 "nbd_device": "/dev/nbd13", 00:07:46.788 "bdev_name": "Nvme2n3" 00:07:46.788 }, 00:07:46.788 { 00:07:46.788 "nbd_device": "/dev/nbd14", 00:07:46.788 "bdev_name": "Nvme3n1" 00:07:46.788 } 00:07:46.788 ]' 00:07:46.788 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.788 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:46.788 /dev/nbd1 00:07:46.788 /dev/nbd10 00:07:46.788 /dev/nbd11 00:07:46.788 /dev/nbd12 00:07:46.788 /dev/nbd13 00:07:46.788 /dev/nbd14' 00:07:46.788 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:46.788 /dev/nbd1 00:07:46.788 /dev/nbd10 00:07:46.788 /dev/nbd11 00:07:46.788 /dev/nbd12 00:07:46.788 /dev/nbd13 00:07:46.788 /dev/nbd14' 00:07:46.788 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:47.049 256+0 records in 00:07:47.049 256+0 records out 00:07:47.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00836795 s, 125 MB/s 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:47.049 256+0 records in 00:07:47.049 256+0 records out 00:07:47.049 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0895634 s, 11.7 MB/s 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.049 00:39:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:47.321 256+0 records in 00:07:47.321 256+0 records out 00:07:47.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200176 s, 5.2 MB/s 00:07:47.321 00:39:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.321 00:39:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:47.582 256+0 records in 00:07:47.582 256+0 records out 00:07:47.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245718 s, 4.3 MB/s 00:07:47.582 00:39:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.582 00:39:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:47.842 256+0 records in 00:07:47.842 256+0 records out 00:07:47.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23538 s, 4.5 MB/s 00:07:47.842 00:39:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.842 00:39:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:47.842 256+0 records in 00:07:47.842 256+0 records out 00:07:47.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243836 s, 4.3 MB/s 00:07:47.842 00:39:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.101 00:39:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:48.101 256+0 records in 00:07:48.101 256+0 records out 00:07:48.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240535 s, 4.4 MB/s 00:07:48.101 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.101 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:48.364 256+0 records in 00:07:48.364 256+0 records out 00:07:48.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.230889 s, 4.5 MB/s 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.364 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.625 00:39:40 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.625 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.887 00:39:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.150 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.411 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.676 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.984 00:39:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.249 00:39:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:50.249 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:50.507 malloc_lvol_verify 00:07:50.507 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:50.507 675af263-929b-4b9e-940c-4f92b060b404 00:07:50.507 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:50.767 6349f0f8-5eba-4fad-a405-a462ecf440ca 00:07:50.767 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:51.029 /dev/nbd0 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:51.029 mke2fs 1.47.0 (5-Feb-2023) 00:07:51.029 Discarding device blocks: 0/4096 done 00:07:51.029 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:51.029 00:07:51.029 Allocating group tables: 0/1 done 00:07:51.029 Writing inode tables: 0/1 done 00:07:51.029 Creating journal (1024 blocks): done 00:07:51.029 Writing superblocks and filesystem accounting information: 0/1 done 00:07:51.029 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:51.029 00:39:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.029 00:39:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73828 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73828 ']' 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73828 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73828 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.290 killing process with pid 73828 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73828' 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73828 00:07:51.290 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73828 00:07:51.552 00:39:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:51.552 00:07:51.552 real 0m11.133s 00:07:51.552 user 0m15.321s 00:07:51.552 sys 0m3.973s 00:07:51.552 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.552 ************************************ 00:07:51.552 END TEST bdev_nbd 00:07:51.552 ************************************ 00:07:51.552 00:39:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:51.552 00:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:51.552 00:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:51.552 00:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:51.552 skipping fio tests on NVMe due to multi-ns failures. 00:07:51.552 00:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
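The shutdown sequence above ends with killprocess for the spdk-nbd target (pid 73828): confirm the pid is set and alive with kill -0, resolve its command name, signal it, then wait so its exit status is reaped. A rough sketch of that pattern reconstructed from the trace (the sudo special case and the non-Linux branch visible in the trace are elided here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1   # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            # in this run, pid 73828 resolved to process_name=reactor_0
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                  # reap the process and propagate its exit code
    }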
00:07:51.552 00:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:51.552 00:39:43 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:51.552 00:39:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:51.552 00:39:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.552 00:39:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.552 ************************************ 00:07:51.552 START TEST bdev_verify 00:07:51.552 ************************************ 00:07:51.552 00:39:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:51.552 [2024-11-17 00:39:43.544579] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:51.552 [2024-11-17 00:39:43.544707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74246 ] 00:07:51.813 [2024-11-17 00:39:43.695141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.813 [2024-11-17 00:39:43.740154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.813 [2024-11-17 00:39:43.740229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.384 Running I/O for 5 seconds... 
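For reference, the verify pass started above is a plain bdevperf run against the same bdev configuration, and it can be reproduced outside the harness with the exact arguments shown in the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Here -q 128 keeps 128 I/Os outstanding per job, -o 4096 issues 4 KiB I/Os, -w verify reads back and checks what was written, -t 5 runs for five seconds, and -m 0x3 gives bdevperf the two cores whose reactors are reported above; the remaining flag is carried over verbatim from the harness.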
00:07:54.713 21376.00 IOPS, 83.50 MiB/s
[2024-11-17T00:39:47.718Z] 20320.00 IOPS, 79.38 MiB/s
[2024-11-17T00:39:48.661Z] 20053.33 IOPS, 78.33 MiB/s
[2024-11-17T00:39:49.604Z] 19888.00 IOPS, 77.69 MiB/s
[2024-11-17T00:39:49.604Z] 19276.80 IOPS, 75.30 MiB/s
00:07:57.541 Latency(us)
00:07:57.541 [2024-11-17T00:39:49.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:57.541 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x0 length 0xbd0bd
00:07:57.541 Nvme0n1 : 5.08 1359.38 5.31 0.00 0.00 93893.80 15627.82 77836.60
00:07:57.541 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:57.541 Nvme0n1 : 5.09 1358.55 5.31 0.00 0.00 94009.17 19358.33 87112.47
00:07:57.541 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x0 length 0x4ff80
00:07:57.541 Nvme1n1p1 : 5.09 1358.48 5.31 0.00 0.00 93759.31 18047.61 70980.53
00:07:57.541 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:57.541 Nvme1n1p1 : 5.09 1357.05 5.30 0.00 0.00 93929.08 21576.47 77030.01
00:07:57.541 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x0 length 0x4ff7f
00:07:57.541 Nvme1n1p2 : 5.09 1357.48 5.30 0.00 0.00 93675.56 20971.52 69770.63
00:07:57.541 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:57.541 Nvme1n1p2 : 5.10 1355.78 5.30 0.00 0.00 93699.28 23290.49 72593.72
00:07:57.541 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x0 length 0x80000
00:07:57.541 Nvme2n1 : 5.09 1357.05 5.30 0.00 0.00 93523.74 20366.57 66947.54
00:07:57.541 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x80000 length 0x80000
00:07:57.541 Nvme2n1 : 5.10 1354.66 5.29 0.00 0.00 93528.54 25407.80 71787.13
00:07:57.541 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x0 length 0x80000
00:07:57.541 Nvme2n2 : 5.10 1355.82 5.30 0.00 0.00 93424.05 18854.20 68560.74
00:07:57.541 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x80000 length 0x80000
00:07:57.541 Nvme2n2 : 5.11 1353.62 5.29 0.00 0.00 93381.01 18450.90 71787.13
00:07:57.541 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x0 length 0x80000
00:07:57.541 Nvme2n3 : 5.10 1354.71 5.29 0.00 0.00 93332.29 18350.08 70173.93
00:07:57.541 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x80000 length 0x80000
00:07:57.541 Nvme2n3 : 5.11 1352.66 5.28 0.00 0.00 93246.21 16131.94 74610.22
00:07:57.541 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x0 length 0x20000
00:07:57.541 Nvme3n1 : 5.11 1353.69 5.29 0.00 0.00 93234.82 13510.50 74610.22
00:07:57.541 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.541 Verification LBA range: start 0x20000 length 0x20000
00:07:57.541 Nvme3n1 : 5.11 1352.28 5.28 0.00 0.00 93170.41 16031.11 77030.01
[2024-11-17T00:39:49.604Z] ===================================================================================================================
[2024-11-17T00:39:49.604Z] Total : 18981.20 74.15 0.00 0.00 93557.66 13510.50 87112.47
00:07:58.112
00:07:58.112 real 0m6.436s
00:07:58.112 user 0m11.994s
00:07:58.112 sys 0m0.257s
00:07:58.112 00:39:49 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:58.112 ************************************
00:07:58.113 END TEST bdev_verify
************************************
00:07:58.113 00:39:49 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:58.113 00:39:49 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:58.113 ************************************
00:07:58.113 START TEST bdev_verify_big_io
************************************
00:07:58.113 00:39:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:58.113 [2024-11-17 00:39:50.062003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-11-17 00:39:50.062148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74339 ]
00:07:58.373 [2024-11-17 00:39:50.214767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:58.373 [2024-11-17 00:39:50.289120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:58.373 [2024-11-17 00:39:50.289256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.944 Running I/O for 5 seconds...
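Two sanity checks at this point. First, the verify totals above are internally consistent: 18981.20 IOPS at 4 KiB per I/O works out to 18981.20 * 4096 / 1048576 ≈ 74.15 MiB/s, matching the MiB/s column. Second, the bdev_verify_big_io pass that has just started is the same bdevperf run with the I/O size raised from 4 KiB to 64 KiB:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3

Each I/O now moves 16 times more data, so the IOPS figures below are correspondingly lower for a comparable amount of bandwidth.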
00:08:04.035 271.00 IOPS, 16.94 MiB/s [2024-11-17T00:39:57.042Z] 2559.00 IOPS, 159.94 MiB/s [2024-11-17T00:39:57.042Z] 3310.67 IOPS, 206.92 MiB/s 00:08:04.979 Latency(us) 00:08:04.979 [2024-11-17T00:39:57.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.979 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:04.979 Verification LBA range: start 0x0 length 0xbd0b 00:08:04.979 Nvme0n1 : 5.68 118.13 7.38 0.00 0.00 1034921.10 38111.70 1219574.55 00:08:04.980 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:04.980 Nvme0n1 : 5.77 117.41 7.34 0.00 0.00 1038446.70 16837.71 1232480.10 00:08:04.980 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x0 length 0x4ff8 00:08:04.980 Nvme1n1p1 : 5.86 119.87 7.49 0.00 0.00 986265.24 96791.63 1206669.00 00:08:04.980 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:04.980 Nvme1n1p1 : 5.77 122.05 7.63 0.00 0.00 984489.93 89532.26 1051802.39 00:08:04.980 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x0 length 0x4ff7 00:08:04.980 Nvme1n1p2 : 5.95 124.86 7.80 0.00 0.00 923132.54 86709.17 851766.35 00:08:04.980 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:04.980 Nvme1n1p2 : 5.86 118.47 7.40 0.00 0.00 969406.30 89935.56 980821.86 00:08:04.980 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x0 length 0x8000 00:08:04.980 Nvme2n1 : 5.95 129.00 8.06 0.00 0.00 874453.20 83886.08 858219.13 00:08:04.980 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x8000 length 0x8000 00:08:04.980 Nvme2n1 : 5.86 117.40 7.34 0.00 0.00 963942.53 88322.36 1780966.01 00:08:04.980 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x0 length 0x8000 00:08:04.980 Nvme2n2 : 6.02 132.00 8.25 0.00 0.00 825594.62 58881.58 877577.45 00:08:04.980 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x8000 length 0x8000 00:08:04.980 Nvme2n2 : 6.01 124.71 7.79 0.00 0.00 881547.25 64931.05 2000360.37 00:08:04.980 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x0 length 0x8000 00:08:04.980 Nvme2n3 : 6.09 142.10 8.88 0.00 0.00 747094.45 24702.03 916294.10 00:08:04.980 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x8000 length 0x8000 00:08:04.980 Nvme2n3 : 6.09 135.48 8.47 0.00 0.00 789716.87 21072.34 1845493.76 00:08:04.980 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x0 length 0x2000 00:08:04.980 Nvme3n1 : 6.10 158.15 9.88 0.00 0.00 657308.15 1304.42 796917.76 00:08:04.980 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:04.980 Verification LBA range: start 0x2000 length 0x2000 00:08:04.980 Nvme3n1 : 6.10 149.61 9.35 0.00 0.00 693620.57 645.91 1871304.86 00:08:04.980 
[2024-11-17T00:39:57.043Z] =================================================================================================================== 00:08:04.980 [2024-11-17T00:39:57.043Z] Total : 1809.23 113.08 0.00 0.00 869830.94 645.91 2000360.37 00:08:05.921 00:08:05.921 real 0m7.819s 00:08:05.921 user 0m14.683s 00:08:05.921 sys 0m0.358s 00:08:05.921 00:39:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.921 00:39:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:05.921 ************************************ 00:08:05.921 END TEST bdev_verify_big_io 00:08:05.921 ************************************ 00:08:05.921 00:39:57 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:05.921 00:39:57 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:05.921 00:39:57 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.921 00:39:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:05.921 ************************************ 00:08:05.921 START TEST bdev_write_zeroes 00:08:05.921 ************************************ 00:08:05.921 00:39:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:05.921 [2024-11-17 00:39:57.922932] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:05.922 [2024-11-17 00:39:57.923034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74437 ] 00:08:06.180 [2024-11-17 00:39:58.065656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.180 [2024-11-17 00:39:58.105639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.746 Running I/O for 1 seconds... 
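Before the next workload: when comparing runs across transcripts like this one, the aggregate "Total :" row is the useful datum. A hedged one-liner for pulling IOPS and MiB/s out of a saved copy of such a log, counting fields from the end of the row so the leading timestamps do not matter (column order as printed above; adjust if the table format ever changes):

awk '/Total[[:space:]]+:/ {print "IOPS=" $(NF-6), "MiB/s=" $(NF-5)}' saved_bdevperf.log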
00:08:07.695 70336.00 IOPS, 274.75 MiB/s 00:08:07.695 Latency(us) 00:08:07.695 [2024-11-17T00:39:59.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.695 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:07.695 Nvme0n1 : 1.03 9973.25 38.96 0.00 0.00 12803.79 9931.22 23996.26 00:08:07.695 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:07.695 Nvme1n1p1 : 1.03 9961.11 38.91 0.00 0.00 12801.68 11141.12 24097.08 00:08:07.695 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:07.695 Nvme1n1p2 : 1.03 9949.05 38.86 0.00 0.00 12780.26 9779.99 23492.14 00:08:07.695 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:07.695 Nvme2n1 : 1.03 9937.83 38.82 0.00 0.00 12760.37 8318.03 22685.54 00:08:07.695 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:07.695 Nvme2n2 : 1.03 9926.66 38.78 0.00 0.00 12755.70 8015.56 22282.24 00:08:07.695 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:07.695 Nvme2n3 : 1.03 9915.57 38.73 0.00 0.00 12750.62 7713.08 22383.06 00:08:07.695 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:07.695 Nvme3n1 : 1.03 9904.50 38.69 0.00 0.00 12746.04 7158.55 24097.08 00:08:07.695 [2024-11-17T00:39:59.758Z] =================================================================================================================== 00:08:07.695 [2024-11-17T00:39:59.758Z] Total : 69567.96 271.75 0.00 0.00 12771.21 7158.55 24097.08 00:08:07.952 00:08:07.952 real 0m1.897s 00:08:07.952 user 0m1.607s 00:08:07.952 sys 0m0.181s 00:08:07.952 00:39:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.952 00:39:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 ************************************ 00:08:07.952 END TEST bdev_write_zeroes 00:08:07.952 ************************************ 00:08:07.952 00:39:59 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.952 00:39:59 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:07.952 00:39:59 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.952 00:39:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 ************************************ 00:08:07.952 START TEST bdev_json_nonenclosed 00:08:07.952 ************************************ 00:08:07.952 00:39:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.952 [2024-11-17 00:39:59.884756] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
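The two config-validation tests that start here (bdev_json_nonenclosed, then bdev_json_nonarray below) feed bdevperf deliberately malformed JSON and pass only if the app refuses to start: one config is not enclosed in {}, the other makes 'subsystems' something other than an array. For contrast, a minimal well-formed config has this shape (the malloc bdev entry is illustrative, not what the test files contain):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}

The *ERROR* lines from json_config.c below are the expected outcome; spdk_app_stop'ing on non-zero is what lets run_test report success.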
00:08:07.952 [2024-11-17 00:39:59.884888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74479 ] 00:08:08.211 [2024-11-17 00:40:00.031174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.211 [2024-11-17 00:40:00.073703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.211 [2024-11-17 00:40:00.073802] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:08.211 [2024-11-17 00:40:00.073818] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:08.211 [2024-11-17 00:40:00.073830] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.211 00:08:08.211 real 0m0.342s 00:08:08.211 user 0m0.135s 00:08:08.211 sys 0m0.104s 00:08:08.211 00:40:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.211 ************************************ 00:08:08.211 END TEST bdev_json_nonenclosed 00:08:08.211 00:40:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:08.211 ************************************ 00:08:08.211 00:40:00 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.211 00:40:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:08.211 00:40:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.211 00:40:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.211 ************************************ 00:08:08.211 START TEST bdev_json_nonarray 00:08:08.211 ************************************ 00:08:08.211 00:40:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.211 [2024-11-17 00:40:00.270505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:08.211 [2024-11-17 00:40:00.270602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74510 ] 00:08:08.470 [2024-11-17 00:40:00.415808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.470 [2024-11-17 00:40:00.458450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.470 [2024-11-17 00:40:00.458558] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:08.470 [2024-11-17 00:40:00.458578] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:08.470 [2024-11-17 00:40:00.458590] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.728 00:08:08.728 real 0m0.333s 00:08:08.728 user 0m0.136s 00:08:08.728 sys 0m0.094s 00:08:08.728 00:40:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.728 ************************************ 00:08:08.728 END TEST bdev_json_nonarray 00:08:08.728 ************************************ 00:08:08.728 00:40:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:08.729 00:40:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:08.729 00:40:00 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:08.729 00:40:00 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:08.729 00:40:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.729 00:40:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.729 00:40:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.729 ************************************ 00:08:08.729 START TEST bdev_gpt_uuid 00:08:08.729 ************************************ 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74530 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74530 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74530 ']' 00:08:08.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:08.729 00:40:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:08.729 [2024-11-17 00:40:00.675034] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
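bdev_gpt_uuid brings up spdk_tgt against the same bdev.json and checks that the two GPT partitions on Nvme1n1 expose stable partition GUIDs. The shell trace below does this with rpc_cmd plus jq; the equivalent by hand, assuming a running target reachable over the default rpc.py socket (the UUID is the SPDK_TEST_first partition GUID visible in the trace):

# Query one gpt bdev by name/UUID and extract its unique partition GUID,
# mirroring the jq filter blockdev.sh applies below.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
  | jq -r '.[0].driver_specific.gpt.unique_partition_guid'

The [[ ... == \6\f\8\9... ]] pattern matches in the trace perform exactly this comparison for both partitions.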
00:08:08.729 [2024-11-17 00:40:00.675164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74530 ] 00:08:08.986 [2024-11-17 00:40:00.817786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.986 [2024-11-17 00:40:00.860507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.553 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.553 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:09.553 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:09.553 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.553 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:09.811 Some configs were skipped because the RPC state that can call them passed over. 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:09.811 { 00:08:09.811 "name": "Nvme1n1p1", 00:08:09.811 "aliases": [ 00:08:09.811 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:09.811 ], 00:08:09.811 "product_name": "GPT Disk", 00:08:09.811 "block_size": 4096, 00:08:09.811 "num_blocks": 655104, 00:08:09.811 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:09.811 "assigned_rate_limits": { 00:08:09.811 "rw_ios_per_sec": 0, 00:08:09.811 "rw_mbytes_per_sec": 0, 00:08:09.811 "r_mbytes_per_sec": 0, 00:08:09.811 "w_mbytes_per_sec": 0 00:08:09.811 }, 00:08:09.811 "claimed": false, 00:08:09.811 "zoned": false, 00:08:09.811 "supported_io_types": { 00:08:09.811 "read": true, 00:08:09.811 "write": true, 00:08:09.811 "unmap": true, 00:08:09.811 "flush": true, 00:08:09.811 "reset": true, 00:08:09.811 "nvme_admin": false, 00:08:09.811 "nvme_io": false, 00:08:09.811 "nvme_io_md": false, 00:08:09.811 "write_zeroes": true, 00:08:09.811 "zcopy": false, 00:08:09.811 "get_zone_info": false, 00:08:09.811 "zone_management": false, 00:08:09.811 "zone_append": false, 00:08:09.811 "compare": true, 00:08:09.811 "compare_and_write": false, 00:08:09.811 "abort": true, 00:08:09.811 "seek_hole": false, 00:08:09.811 "seek_data": false, 00:08:09.811 "copy": true, 00:08:09.811 "nvme_iov_md": false 00:08:09.811 }, 00:08:09.811 "driver_specific": { 
00:08:09.811 "gpt": { 00:08:09.811 "base_bdev": "Nvme1n1", 00:08:09.811 "offset_blocks": 256, 00:08:09.811 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:09.811 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:09.811 "partition_name": "SPDK_TEST_first" 00:08:09.811 } 00:08:09.811 } 00:08:09.811 } 00:08:09.811 ]' 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:09.811 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:10.087 { 00:08:10.087 "name": "Nvme1n1p2", 00:08:10.087 "aliases": [ 00:08:10.087 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:10.087 ], 00:08:10.087 "product_name": "GPT Disk", 00:08:10.087 "block_size": 4096, 00:08:10.087 "num_blocks": 655103, 00:08:10.087 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:10.087 "assigned_rate_limits": { 00:08:10.087 "rw_ios_per_sec": 0, 00:08:10.087 "rw_mbytes_per_sec": 0, 00:08:10.087 "r_mbytes_per_sec": 0, 00:08:10.087 "w_mbytes_per_sec": 0 00:08:10.087 }, 00:08:10.087 "claimed": false, 00:08:10.087 "zoned": false, 00:08:10.087 "supported_io_types": { 00:08:10.087 "read": true, 00:08:10.087 "write": true, 00:08:10.087 "unmap": true, 00:08:10.087 "flush": true, 00:08:10.087 "reset": true, 00:08:10.087 "nvme_admin": false, 00:08:10.087 "nvme_io": false, 00:08:10.087 "nvme_io_md": false, 00:08:10.087 "write_zeroes": true, 00:08:10.087 "zcopy": false, 00:08:10.087 "get_zone_info": false, 00:08:10.087 "zone_management": false, 00:08:10.087 "zone_append": false, 00:08:10.087 "compare": true, 00:08:10.087 "compare_and_write": false, 00:08:10.087 "abort": true, 00:08:10.087 "seek_hole": false, 00:08:10.087 "seek_data": false, 00:08:10.087 "copy": true, 00:08:10.087 "nvme_iov_md": false 00:08:10.087 }, 00:08:10.087 "driver_specific": { 00:08:10.087 "gpt": { 00:08:10.087 "base_bdev": "Nvme1n1", 00:08:10.087 "offset_blocks": 655360, 00:08:10.087 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:10.087 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:10.087 "partition_name": "SPDK_TEST_second" 00:08:10.087 } 00:08:10.087 } 00:08:10.087 } 00:08:10.087 ]' 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:10.087 00:40:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74530 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74530 ']' 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74530 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74530 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.087 killing process with pid 74530 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74530' 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74530 00:08:10.087 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74530 00:08:10.352 00:08:10.352 real 0m1.786s 00:08:10.352 user 0m1.857s 00:08:10.352 sys 0m0.387s 00:08:10.352 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.352 ************************************ 00:08:10.352 END TEST bdev_gpt_uuid 00:08:10.352 ************************************ 00:08:10.352 00:40:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.611 00:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:10.611 00:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:10.611 00:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:10.611 00:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:10.611 00:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:10.611 00:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:10.611 00:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:10.611 00:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:10.611 00:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:10.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:10.870 Waiting for block devices as requested 00:08:10.870 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.129 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:11.129 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.129 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:16.394 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:16.394 00:40:08 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:16.394 00:40:08 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:16.394 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:16.394 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:16.395 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:16.395 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:16.395 00:40:08 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:16.395 00:08:16.395 real 0m49.948s 00:08:16.395 user 1m2.215s 00:08:16.395 sys 0m8.821s 00:08:16.395 ************************************ 00:08:16.395 END TEST blockdev_nvme_gpt 00:08:16.395 ************************************ 00:08:16.395 00:40:08 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.395 00:40:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.653 00:40:08 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:16.653 00:40:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.653 00:40:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.653 00:40:08 -- common/autotest_common.sh@10 -- # set +x 00:08:16.653 ************************************ 00:08:16.653 START TEST nvme 00:08:16.653 ************************************ 00:08:16.653 00:40:08 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:16.653 * Looking for test storage... 00:08:16.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:16.653 00:40:08 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:16.653 00:40:08 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:16.653 00:40:08 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:16.653 00:40:08 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:16.653 00:40:08 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.653 00:40:08 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.653 00:40:08 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.653 00:40:08 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.653 00:40:08 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.653 00:40:08 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.653 00:40:08 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.653 00:40:08 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.653 00:40:08 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.653 00:40:08 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.653 00:40:08 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.653 00:40:08 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:16.653 00:40:08 nvme -- scripts/common.sh@345 -- # : 1 00:08:16.653 00:40:08 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.653 00:40:08 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.653 00:40:08 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:16.653 00:40:08 nvme -- scripts/common.sh@353 -- # local d=1 00:08:16.653 00:40:08 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.653 00:40:08 nvme -- scripts/common.sh@355 -- # echo 1 00:08:16.653 00:40:08 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.653 00:40:08 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:16.653 00:40:08 nvme -- scripts/common.sh@353 -- # local d=2 00:08:16.654 00:40:08 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.654 00:40:08 nvme -- scripts/common.sh@355 -- # echo 2 00:08:16.654 00:40:08 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.654 00:40:08 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.654 00:40:08 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.654 00:40:08 nvme -- scripts/common.sh@368 -- # return 0 00:08:16.654 00:40:08 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.654 00:40:08 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:16.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.654 --rc genhtml_branch_coverage=1 00:08:16.654 --rc genhtml_function_coverage=1 00:08:16.654 --rc genhtml_legend=1 00:08:16.654 --rc geninfo_all_blocks=1 00:08:16.654 --rc geninfo_unexecuted_blocks=1 00:08:16.654 00:08:16.654 ' 00:08:16.654 00:40:08 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:16.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.654 --rc genhtml_branch_coverage=1 00:08:16.654 --rc genhtml_function_coverage=1 00:08:16.654 --rc genhtml_legend=1 00:08:16.654 --rc geninfo_all_blocks=1 00:08:16.654 --rc geninfo_unexecuted_blocks=1 00:08:16.654 00:08:16.654 ' 00:08:16.654 00:40:08 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:16.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.654 --rc genhtml_branch_coverage=1 00:08:16.654 --rc genhtml_function_coverage=1 00:08:16.654 --rc genhtml_legend=1 00:08:16.654 --rc geninfo_all_blocks=1 00:08:16.654 --rc geninfo_unexecuted_blocks=1 00:08:16.654 00:08:16.654 ' 00:08:16.654 00:40:08 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:16.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.654 --rc genhtml_branch_coverage=1 00:08:16.654 --rc genhtml_function_coverage=1 00:08:16.654 --rc genhtml_legend=1 00:08:16.654 --rc geninfo_all_blocks=1 00:08:16.654 --rc geninfo_unexecuted_blocks=1 00:08:16.654 00:08:16.654 ' 00:08:16.654 00:40:08 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:17.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:17.489 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:17.489 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:17.489 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:17.747 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:17.747 00:40:09 nvme -- nvme/nvme.sh@79 -- # uname 00:08:17.747 00:40:09 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:17.747 00:40:09 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:17.747 00:40:09 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:17.747 00:40:09 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:17.747 00:40:09 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:17.747 00:40:09 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:17.747 Waiting for stub to ready for secondary processes... 00:08:17.747 00:40:09 nvme -- common/autotest_common.sh@1071 -- # stubpid=75154 00:08:17.747 00:40:09 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:17.747 00:40:09 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:17.747 00:40:09 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:17.747 00:40:09 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75154 ]] 00:08:17.747 00:40:09 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:17.747 [2024-11-17 00:40:09.664096] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:17.747 [2024-11-17 00:40:09.664221] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:18.682 [2024-11-17 00:40:10.591145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.682 [2024-11-17 00:40:10.615781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.682 [2024-11-17 00:40:10.616024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.682 [2024-11-17 00:40:10.616034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.682 [2024-11-17 00:40:10.627749] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:18.682 [2024-11-17 00:40:10.627788] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:18.682 [2024-11-17 00:40:10.638155] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:18.682 00:40:10 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:18.682 00:40:10 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75154 ]] 00:08:18.682 [2024-11-17 00:40:10.638438] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:18.682 00:40:10 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:18.682 [2024-11-17 00:40:10.640073] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:18.682 [2024-11-17 00:40:10.640338] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:18.682 [2024-11-17 00:40:10.640435] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:18.682 [2024-11-17 00:40:10.642519] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:18.682 [2024-11-17 00:40:10.642779] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:18.682 [2024-11-17 00:40:10.642865] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:18.682 [2024-11-17 00:40:10.645001] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:18.682 [2024-11-17 00:40:10.645246] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:18.682 [2024-11-17 00:40:10.645339] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:18.682 [2024-11-17 00:40:10.645463] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:18.682 [2024-11-17 00:40:10.645566] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:19.616 done. 00:08:19.617 00:40:11 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:19.617 00:40:11 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:19.617 00:40:11 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:19.617 00:40:11 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:19.617 00:40:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.617 00:40:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.617 ************************************ 00:08:19.617 START TEST nvme_reset 00:08:19.617 ************************************ 00:08:19.617 00:40:11 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:19.874 Initializing NVMe Controllers 00:08:19.874 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:19.874 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:19.874 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:19.874 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:19.875 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:19.875 00:08:19.875 real 0m0.192s 00:08:19.875 user 0m0.062s 00:08:19.875 sys 0m0.085s 00:08:19.875 ************************************ 00:08:19.875 00:40:11 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.875 00:40:11 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:19.875 END TEST nvme_reset 00:08:19.875 ************************************ 00:08:19.875 00:40:11 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:19.875 00:40:11 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.875 00:40:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.875 00:40:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.875 ************************************ 00:08:19.875 START TEST nvme_identify 00:08:19.875 ************************************ 00:08:19.875 00:40:11 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:19.875 00:40:11 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:19.875 00:40:11 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:19.875 00:40:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:19.875 00:40:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:19.875 00:40:11 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:19.875 00:40:11 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:19.875 00:40:11 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:19.875 00:40:11 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:19.875 00:40:11 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:20.136 00:40:11 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:20.136 00:40:11 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:20.136 00:40:11 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:20.136 ===================================================== 00:08:20.136 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:20.136 ===================================================== 00:08:20.136 Controller Capabilities/Features 00:08:20.136 ================================ 00:08:20.136 Vendor ID: 1b36 00:08:20.136 Subsystem Vendor ID: 1af4 00:08:20.136 Serial Number: 12340 00:08:20.136 Model Number: QEMU NVMe Ctrl 00:08:20.136 Firmware Version: 8.0.0 00:08:20.136 Recommended Arb Burst: 6 00:08:20.136 IEEE OUI Identifier: 00 54 52 00:08:20.136 Multi-path I/O 00:08:20.136 May have multiple subsystem ports: No 00:08:20.136 May have multiple controllers: No 00:08:20.136 Associated with SR-IOV VF: No 00:08:20.136 Max Data Transfer Size: 524288 00:08:20.136 Max Number of Namespaces: 256 00:08:20.136 Max Number of I/O Queues: 64 00:08:20.136 NVMe Specification Version (VS): 1.4 00:08:20.136 NVMe Specification Version (Identify): 1.4 00:08:20.136 Maximum Queue Entries: 2048 00:08:20.136 Contiguous Queues Required: Yes 00:08:20.136 Arbitration Mechanisms Supported 00:08:20.136 Weighted Round Robin: Not Supported 00:08:20.136 Vendor Specific: Not Supported 00:08:20.136 Reset Timeout: 7500 ms 00:08:20.136 Doorbell Stride: 4 bytes 00:08:20.136 NVM Subsystem Reset: Not Supported 00:08:20.136 Command Sets Supported 00:08:20.136 NVM Command Set: Supported 00:08:20.136 Boot Partition: Not Supported 00:08:20.136 Memory Page Size Minimum: 4096 bytes 00:08:20.136 Memory Page Size Maximum: 65536 bytes 00:08:20.136 Persistent Memory Region: Not Supported 00:08:20.136 Optional Asynchronous Events Supported 00:08:20.136 Namespace Attribute Notices: Supported 00:08:20.136 Firmware Activation Notices: Not Supported 00:08:20.136 ANA Change Notices: Not Supported 00:08:20.136 PLE Aggregate Log Change Notices: Not Supported 00:08:20.136 LBA Status Info Alert Notices: Not Supported 00:08:20.136 EGE Aggregate Log Change Notices: Not Supported 00:08:20.136 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.136 Zone Descriptor Change Notices: Not Supported 00:08:20.136 Discovery Log Change Notices: Not Supported 00:08:20.136 Controller Attributes 00:08:20.136 128-bit Host Identifier: Not Supported 00:08:20.136 Non-Operational Permissive Mode: Not Supported 00:08:20.136 NVM Sets: Not Supported 00:08:20.136 Read Recovery Levels: Not Supported 00:08:20.136 Endurance Groups: Not Supported 00:08:20.136 Predictable Latency Mode: Not Supported 00:08:20.136 Traffic Based Keep ALive: Not Supported 00:08:20.136 Namespace Granularity: Not Supported 00:08:20.136 SQ Associations: Not Supported 00:08:20.136 UUID List: Not Supported 00:08:20.136 Multi-Domain Subsystem: Not Supported 00:08:20.136 Fixed Capacity Management: Not Supported 00:08:20.136 Variable Capacity Management: Not Supported 00:08:20.136 Delete Endurance Group: Not Supported 00:08:20.136 Delete NVM Set: Not Supported 00:08:20.136 Extended LBA Formats Supported: Supported 00:08:20.136 Flexible Data Placement Supported: Not Supported 00:08:20.136 00:08:20.136 Controller Memory Buffer Support 00:08:20.136 ================================ 00:08:20.136 Supported: No 00:08:20.136 00:08:20.136 Persistent Memory Region Support 00:08:20.136 ================================ 00:08:20.136 Supported: No 00:08:20.136 00:08:20.136 Admin 
Command Set Attributes 00:08:20.136 ============================ 00:08:20.136 Security Send/Receive: Not Supported 00:08:20.136 Format NVM: Supported 00:08:20.136 Firmware Activate/Download: Not Supported 00:08:20.136 Namespace Management: Supported 00:08:20.136 Device Self-Test: Not Supported 00:08:20.136 Directives: Supported 00:08:20.136 NVMe-MI: Not Supported 00:08:20.136 Virtualization Management: Not Supported 00:08:20.136 Doorbell Buffer Config: Supported 00:08:20.136 Get LBA Status Capability: Not Supported 00:08:20.136 Command & Feature Lockdown Capability: Not Supported 00:08:20.136 Abort Command Limit: 4 00:08:20.136 Async Event Request Limit: 4 00:08:20.136 Number of Firmware Slots: N/A 00:08:20.136 Firmware Slot 1 Read-Only: N/A 00:08:20.136 Firmware Activation Without Reset: N/A 00:08:20.136 Multiple Update Detection Support: N/A 00:08:20.136 Firmware Update Granularity: No Information Provided 00:08:20.136 Per-Namespace SMART Log: Yes 00:08:20.136 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.136 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:20.136 Command Effects Log Page: Supported 00:08:20.136 Get Log Page Extended Data: Supported 00:08:20.136 Telemetry Log Pages: Not Supported 00:08:20.136 Persistent Event Log Pages: Not Supported 00:08:20.136 Supported Log Pages Log Page: May Support 00:08:20.136 Commands Supported & Effects Log Page: Not Supported 00:08:20.136 Feature Identifiers & Effects Log Page:May Support 00:08:20.136 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.136 Data Area 4 for Telemetry Log: Not Supported 00:08:20.136 Error Log Page Entries Supported: 1 00:08:20.136 Keep Alive: Not Supported 00:08:20.136 00:08:20.136 NVM Command Set Attributes 00:08:20.136 ========================== 00:08:20.136 Submission Queue Entry Size 00:08:20.136 Max: 64 00:08:20.136 Min: 64 00:08:20.136 Completion Queue Entry Size 00:08:20.136 Max: 16 00:08:20.136 Min: 16 00:08:20.136 Number of Namespaces: 256 00:08:20.136 Compare Command: Supported 00:08:20.136 Write Uncorrectable Command: Not Supported 00:08:20.136 Dataset Management Command: Supported 00:08:20.136 Write Zeroes Command: Supported 00:08:20.136 Set Features Save Field: Supported 00:08:20.136 Reservations: Not Supported 00:08:20.136 Timestamp: Supported 00:08:20.136 Copy: Supported 00:08:20.136 Volatile Write Cache: Present 00:08:20.136 Atomic Write Unit (Normal): 1 00:08:20.136 Atomic Write Unit (PFail): 1 00:08:20.136 Atomic Compare & Write Unit: 1 00:08:20.136 Fused Compare & Write: Not Supported 00:08:20.136 Scatter-Gather List 00:08:20.136 SGL Command Set: Supported 00:08:20.136 SGL Keyed: Not Supported 00:08:20.136 SGL Bit Bucket Descriptor: Not Supported 00:08:20.136 SGL Metadata Pointer: Not Supported 00:08:20.136 Oversized SGL: Not Supported 00:08:20.136 SGL Metadata Address: Not Supported 00:08:20.136 SGL Offset: Not Supported 00:08:20.136 Transport SGL Data Block: Not Supported 00:08:20.136 Replay Protected Memory Block: Not Supported 00:08:20.136 00:08:20.136 Firmware Slot Information 00:08:20.136 ========================= 00:08:20.136 Active slot: 1 00:08:20.136 Slot 1 Firmware Revision: 1.0 00:08:20.136 00:08:20.136 00:08:20.136 Commands Supported and Effects 00:08:20.136 ============================== 00:08:20.136 Admin Commands 00:08:20.136 -------------- 00:08:20.136 Delete I/O Submission Queue (00h): Supported 00:08:20.136 Create I/O Submission Queue (01h): Supported 00:08:20.136 Get Log Page (02h): Supported 00:08:20.136 Delete I/O Completion Queue (04h): Supported 
00:08:20.136 Create I/O Completion Queue (05h): Supported 00:08:20.136 Identify (06h): Supported 00:08:20.136 Abort (08h): Supported 00:08:20.136 Set Features (09h): Supported 00:08:20.136 Get Features (0Ah): Supported 00:08:20.136 Asynchronous Event Request (0Ch): Supported 00:08:20.136 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.136 Directive Send (19h): Supported 00:08:20.136 Directive Receive (1Ah): Supported 00:08:20.136 Virtualization Management (1Ch): Supported 00:08:20.136 Doorbell Buffer Config (7Ch): Supported 00:08:20.136 Format NVM (80h): Supported LBA-Change 00:08:20.136 I/O Commands 00:08:20.136 ------------ 00:08:20.136 Flush (00h): Supported LBA-Change 00:08:20.136 Write (01h): Supported LBA-Change 00:08:20.136 Read (02h): Supported 00:08:20.136 Compare (05h): Supported 00:08:20.136 Write Zeroes (08h): Supported LBA-Change 00:08:20.136 Dataset Management (09h): Supported LBA-Change 00:08:20.136 Unknown (0Ch): Supported 00:08:20.136 Unknown (12h): Supported 00:08:20.136 Copy (19h): Supported LBA-Change 00:08:20.136 Unknown (1Dh): Supported LBA-Change 00:08:20.136 00:08:20.136 Error Log 00:08:20.136 ========= 00:08:20.136 00:08:20.136 Arbitration 00:08:20.136 =========== 00:08:20.136 Arbitration Burst: no limit 00:08:20.136 00:08:20.136 Power Management 00:08:20.136 ================ 00:08:20.137 Number of Power States: 1 00:08:20.137 Current Power State: Power State #0 00:08:20.137 Power State #0: 00:08:20.137 Max Power: 25.00 W 00:08:20.137 Non-Operational State: Operational 00:08:20.137 Entry Latency: 16 microseconds 00:08:20.137 Exit Latency: 4 microseconds 00:08:20.137 Relative Read Throughput: 0 00:08:20.137 Relative Read Latency: 0 00:08:20.137 Relative Write Throughput: 0 00:08:20.137 Relative Write Latency: 0 00:08:20.137 Idle Power: Not Reported 00:08:20.137 Active Power: Not Reported 00:08:20.137 Non-Operational Permissive Mode: Not Supported 00:08:20.137 00:08:20.137 Health Information 00:08:20.137 ================== 00:08:20.137 Critical Warnings: 00:08:20.137 Available Spare Space: OK 00:08:20.137 Temperature: OK 00:08:20.137 Device Reliability: OK 00:08:20.137 Read Only: No 00:08:20.137 Volatile Memory Backup: OK 00:08:20.137 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.137 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.137 Available Spare: 0% 00:08:20.137 Available Spare Threshold: 0% 00:08:20.137 Life Percentage Used: 0% 00:08:20.137 Data Units Read: 699 00:08:20.137 Data Units Written: 627 00:08:20.137 Host Read Commands: 33742 00:08:20.137 Host Write Commands: 33528 00:08:20.137 Controller Busy Time: 0 minutes 00:08:20.137 Power Cycles: 0 00:08:20.137 Power On Hours: 0 hours 00:08:20.137 Unsafe Shutdowns: 0 00:08:20.137 Unrecoverable Media Errors: 0 00:08:20.137 Lifetime Error Log Entries: 0 00:08:20.137 Warning Temperature Time: 0 minutes 00:08:20.137 Critical Temperature Time: 0 minutes 00:08:20.137 00:08:20.137 Number of Queues 00:08:20.137 ================ 00:08:20.137 Number of I/O Submission Queues: 64 00:08:20.137 Number of I/O Completion Queues: 64 00:08:20.137 00:08:20.137 ZNS Specific Controller Data 00:08:20.137 ============================ 00:08:20.137 Zone Append Size Limit: 0 00:08:20.137 00:08:20.137 00:08:20.137 Active Namespaces 00:08:20.137 ================= 00:08:20.137 Namespace ID:1 00:08:20.137 Error Recovery Timeout: Unlimited 00:08:20.137 Command Set Identifier: NVM (00h) 00:08:20.137 Deallocate: Supported 00:08:20.137 Deallocated/Unwritten Error: Supported 00:08:20.137 Deallocated Read Value: 
All 0x00 00:08:20.137 Deallocate in Write Zeroes: Not Supported 00:08:20.137 Deallocated Guard Field: 0xFFFF 00:08:20.137 Flush: Supported 00:08:20.137 Reservation: Not Supported 00:08:20.137 Metadata Transferred as: Separate Metadata Buffer 00:08:20.137 Namespace Sharing Capabilities: Private 00:08:20.137 Size (in LBAs): 1548666 (5GiB) 00:08:20.137 Capacity (in LBAs): 1548666 (5GiB) 00:08:20.137 Utilization (in LBAs): 1548666 (5GiB) 00:08:20.137 Thin Provisioning: Not Supported 00:08:20.137 Per-NS Atomic Units: No 00:08:20.137 Maximum Single Source Range Length: 128 00:08:20.137 Maximum Copy Length: 128 00:08:20.137 Maximum Source Range Count: 128 00:08:20.137 NGUID/EUI64 Never Reused: No 00:08:20.137 Namespace Write Protected: No 00:08:20.137 Number of LBA Formats: 8 00:08:20.137 Current LBA Format: LBA Format #07 00:08:20.137 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.137 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.137 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.137 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.137 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.137 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.137 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.137 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.137 00:08:20.137 NVM Specific Namespace Data 00:08:20.137 =========================== 00:08:20.137 Logical Block Storage Tag Mask: 0 00:08:20.137 Protection Information Capabilities: 00:08:20.137 16b Guard Protection Information Storage Tag Support: No 00:08:20.137 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.137 Storage Tag Check Read Support: No 00:08:20.137 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.137 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.137 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.137 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.137 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.137 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.137 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.137 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.137 ===================================================== 00:08:20.137 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:20.137 ===================================================== 00:08:20.137 Controller Capabilities/Features 00:08:20.137 ================================ 00:08:20.137 Vendor ID: 1b36 00:08:20.137 Subsystem Vendor ID: 1af4 00:08:20.137 Serial Number: 12341 00:08:20.137 Model Number: QEMU NVMe Ctrl 00:08:20.137 Firmware Version: 8.0.0 00:08:20.137 Recommended Arb Burst: 6 00:08:20.137 IEEE OUI Identifier: 00 54 52 00:08:20.137 Multi-path I/O 00:08:20.137 May have multiple subsystem ports: No 00:08:20.137 May have multiple controllers: No 00:08:20.137 Associated with SR-IOV VF: No 00:08:20.137 Max Data Transfer Size: 524288 00:08:20.137 Max Number of Namespaces: 256 00:08:20.137 Max Number of I/O Queues: 64 00:08:20.137 NVMe Specification Version (VS): 1.4 00:08:20.137 NVMe Specification Version (Identify): 1.4 00:08:20.137 Maximum Queue Entries: 2048 
00:08:20.137 Contiguous Queues Required: Yes 00:08:20.137 Arbitration Mechanisms Supported 00:08:20.137 Weighted Round Robin: Not Supported 00:08:20.137 Vendor Specific: Not Supported 00:08:20.137 Reset Timeout: 7500 ms 00:08:20.137 Doorbell Stride: 4 bytes 00:08:20.137 NVM Subsystem Reset: Not Supported 00:08:20.137 Command Sets Supported 00:08:20.137 NVM Command Set: Supported 00:08:20.137 Boot Partition: Not Supported 00:08:20.137 Memory Page Size Minimum: 4096 bytes 00:08:20.137 Memory Page Size Maximum: 65536 bytes 00:08:20.137 Persistent Memory Region: Not Supported 00:08:20.137 Optional Asynchronous Events Supported 00:08:20.137 Namespace Attribute Notices: Supported 00:08:20.137 Firmware Activation Notices: Not Supported 00:08:20.137 ANA Change Notices: Not Supported 00:08:20.137 PLE Aggregate Log Change Notices: Not Supported 00:08:20.137 LBA Status Info Alert Notices: Not Supported 00:08:20.137 EGE Aggregate Log Change Notices: Not Supported 00:08:20.137 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.137 Zone Descriptor Change Notices: Not Supported 00:08:20.137 Discovery Log Change Notices: Not Supported 00:08:20.137 Controller Attributes 00:08:20.137 128-bit Host Identifier: Not Supported 00:08:20.137 Non-Operational Permissive Mode: Not Supported 00:08:20.137 NVM Sets: Not Supported 00:08:20.137 Read Recovery Levels: Not Supported 00:08:20.137 Endurance Groups: Not Supported 00:08:20.137 Predictable Latency Mode: Not Supported 00:08:20.137 Traffic Based Keep ALive: Not Supported 00:08:20.137 Namespace Granularity: Not Supported 00:08:20.137 SQ Associations: Not Supported 00:08:20.137 UUID List: Not Supported 00:08:20.137 Multi-Domain Subsystem: Not Supported 00:08:20.137 Fixed Capacity Management: Not Supported 00:08:20.137 Variable Capacity Management: Not Supported 00:08:20.137 Delete Endurance Group: Not Supported 00:08:20.137 Delete NVM Set: Not Supported 00:08:20.137 Extended LBA Formats Supported: Supported 00:08:20.137 Flexible Data Placement Supported: Not Supported 00:08:20.137 00:08:20.137 Controller Memory Buffer Support 00:08:20.137 ================================ 00:08:20.137 Supported: No 00:08:20.137 00:08:20.137 Persistent Memory Region Support 00:08:20.137 ================================ 00:08:20.137 Supported: No 00:08:20.137 00:08:20.137 Admin Command Set Attributes 00:08:20.137 ============================ 00:08:20.137 Security Send/Receive: Not Supported 00:08:20.137 Format NVM: Supported 00:08:20.137 Firmware Activate/Download: Not Supported 00:08:20.137 Namespace Management: Supported 00:08:20.137 Device Self-Test: Not Supported 00:08:20.137 Directives: Supported 00:08:20.137 NVMe-MI: Not Supported 00:08:20.137 Virtualization Management: Not Supported 00:08:20.137 Doorbell Buffer Config: Supported 00:08:20.137 Get LBA Status Capability: Not Supported 00:08:20.137 Command & Feature Lockdown Capability: Not Supported 00:08:20.137 Abort Command Limit: 4 00:08:20.138 Async Event Request Limit: 4 00:08:20.138 Number of Firmware Slots: N/A 00:08:20.138 Firmware Slot 1 Read-Only: N/A 00:08:20.138 Firmware Activation Without Reset: N/A 00:08:20.138 Multiple Update Detection Support: N/A 00:08:20.138 Firmware Update Granularity: No Information Provided 00:08:20.138 Per-Namespace SMART Log: Yes 00:08:20.138 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.138 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:20.138 Command Effects Log Page: Supported 00:08:20.138 Get Log Page Extended Data: Supported 00:08:20.138 Telemetry Log Pages: Not 
Supported 00:08:20.138 Persistent Event Log Pages: Not Supported 00:08:20.138 Supported Log Pages Log Page: May Support 00:08:20.138 Commands Supported & Effects Log Page: Not Supported 00:08:20.138 Feature Identifiers & Effects Log Page:May Support 00:08:20.138 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.138 Data Area 4 for Telemetry Log: Not Supported 00:08:20.138 Error Log Page Entries Supported: 1 00:08:20.138 Keep Alive: Not Supported 00:08:20.138 00:08:20.138 NVM Command Set Attributes 00:08:20.138 ========================== 00:08:20.138 Submission Queue Entry Size 00:08:20.138 Max: 64 00:08:20.138 Min: 64 00:08:20.138 Completion Queue Entry Size 00:08:20.138 Max: 16 00:08:20.138 Min: 16 00:08:20.138 Number of Namespaces: 256 00:08:20.138 Compare Command: Supported 00:08:20.138 Write Uncorrectable Command: Not Supported 00:08:20.138 Dataset Management Command: Supported 00:08:20.138 Write Zeroes Command: Supported 00:08:20.138 Set Features Save Field: Supported 00:08:20.138 Reservations: Not Supported 00:08:20.138 Timestamp: Supported 00:08:20.138 Copy: Supported 00:08:20.138 Volatile Write Cache: Present 00:08:20.138 Atomic Write Unit (Normal): 1 00:08:20.138 Atomic Write Unit (PFail): 1 00:08:20.138 Atomic Compare & Write Unit: 1 00:08:20.138 Fused Compare & Write: Not Supported 00:08:20.138 Scatter-Gather List 00:08:20.138 SGL Command Set: Supported 00:08:20.138 SGL Keyed: Not Supported 00:08:20.138 SGL Bit Bucket Descriptor: Not Supported 00:08:20.138 SGL Metadata Pointer: Not Supported 00:08:20.138 Oversized SGL: Not Supported 00:08:20.138 SGL Metadata Address: Not Supported 00:08:20.138 SGL Offset: Not Supported 00:08:20.138 Transport SGL Data Block: Not Supported 00:08:20.138 Replay Protected Memory Block: Not Supported 00:08:20.138 00:08:20.138 Firmware Slot Information 00:08:20.138 ========================= 00:08:20.138 Active slot: 1 00:08:20.138 Slot 1 Firmware Revision: 1.0 00:08:20.138 00:08:20.138 00:08:20.138 Commands Supported and Effects 00:08:20.138 ============================== 00:08:20.138 Admin Commands 00:08:20.138 -------------- 00:08:20.138 Delete I/O Submission Queue (00h): Supported 00:08:20.138 Create I/O Submission Queue (01h): Supported 00:08:20.138 Get Log Page (02h): Supported 00:08:20.138 Delete I/O Completion Queue (04h): Supported 00:08:20.138 Create I/O Completion Queue (05h): Supported 00:08:20.138 Identify (06h): Supported 00:08:20.138 Abort (08h): Supported 00:08:20.138 Set Features (09h): Supported 00:08:20.138 Get Features (0Ah): Supported 00:08:20.138 Asynchronous Event Request (0Ch): Supported 00:08:20.138 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.138 Directive Send (19h): Supported 00:08:20.138 Directive Receive (1Ah): Supported 00:08:20.138 Virtualization Management (1Ch): Supported 00:08:20.138 Doorbell Buffer Config (7Ch): Supported 00:08:20.138 Format NVM (80h): Supported LBA-Change 00:08:20.138 I/O Commands 00:08:20.138 ------------ 00:08:20.138 Flush (00h): Supported LBA-Change 00:08:20.138 Write (01h): Supported LBA-Change 00:08:20.138 Read (02h): Supported 00:08:20.138 Compare (05h): Supported 00:08:20.138 Write Zeroes (08h): Supported LBA-Change 00:08:20.138 Dataset Management (09h): Supported LBA-Change 00:08:20.138 Unknown (0Ch): Supported 00:08:20.138 Unknown (12h): Supported 00:08:20.138 Copy (19h): Supported LBA-Change 00:08:20.138 Unknown (1Dh): Supported LBA-Change 00:08:20.138 00:08:20.138 Error Log 00:08:20.138 ========= 00:08:20.138 00:08:20.138 Arbitration 00:08:20.138 =========== 
00:08:20.138 Arbitration Burst: no limit 00:08:20.138 00:08:20.138 Power Management 00:08:20.138 ================ 00:08:20.138 Number of Power States: 1 00:08:20.138 Current Power State: Power State #0 00:08:20.138 Power State #0: 00:08:20.138 Max Power: 25.00 W 00:08:20.138 Non-Operational State: Operational 00:08:20.138 Entry Latency: 16 microseconds 00:08:20.138 Exit Latency: 4 microseconds 00:08:20.138 Relative Read Throughput: 0 00:08:20.138 Relative Read Latency: 0 00:08:20.138 Relative Write Throughput: 0 00:08:20.138 Relative Write Latency: 0 00:08:20.138 Idle Power: Not Reported 00:08:20.138 Active Power: Not Reported 00:08:20.138 Non-Operational Permissive Mode: Not Supported 00:08:20.138 00:08:20.138 Health Information 00:08:20.138 ================== 00:08:20.138 Critical Warnings: 00:08:20.138 Available Spare Space: OK 00:08:20.138 Temperature: OK 00:08:20.138 [2024-11-17 00:40:12.115818] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 75186 terminated unexpected 00:08:20.138 [2024-11-17 00:40:12.116854] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 75186 terminated unexpected 00:08:20.138 [2024-11-17 00:40:12.117925] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 75186 terminated unexpected 00:08:20.138 Device Reliability: OK 00:08:20.138 Read Only: No 00:08:20.138 Volatile Memory Backup: OK 00:08:20.138 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.138 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.138 Available Spare: 0% 00:08:20.138 Available Spare Threshold: 0% 00:08:20.138 Life Percentage Used: 0% 00:08:20.138 Data Units Read: 1061 00:08:20.138 Data Units Written: 934 00:08:20.138 Host Read Commands: 50065 00:08:20.138 Host Write Commands: 48948 00:08:20.138 Controller Busy Time: 0 minutes 00:08:20.138 Power Cycles: 0 00:08:20.138 Power On Hours: 0 hours 00:08:20.138 Unsafe Shutdowns: 0 00:08:20.138 Unrecoverable Media Errors: 0 00:08:20.138 Lifetime Error Log Entries: 0 00:08:20.138 Warning Temperature Time: 0 minutes 00:08:20.138 Critical Temperature Time: 0 minutes 00:08:20.138 00:08:20.138 Number of Queues 00:08:20.138 ================ 00:08:20.138 Number of I/O Submission Queues: 64 00:08:20.138 Number of I/O Completion Queues: 64 00:08:20.138 00:08:20.138 ZNS Specific Controller Data 00:08:20.138 ============================ 00:08:20.138 Zone Append Size Limit: 0 00:08:20.138 00:08:20.138 00:08:20.138 Active Namespaces 00:08:20.138 ================= 00:08:20.138 Namespace ID:1 00:08:20.138 Error Recovery Timeout: Unlimited 00:08:20.138 Command Set Identifier: NVM (00h) 00:08:20.138 Deallocate: Supported 00:08:20.138 Deallocated/Unwritten Error: Supported 00:08:20.138 Deallocated Read Value: All 0x00 00:08:20.138 Deallocate in Write Zeroes: Not Supported 00:08:20.138 Deallocated Guard Field: 0xFFFF 00:08:20.138 Flush: Supported 00:08:20.138 Reservation: Not Supported 00:08:20.138 Namespace Sharing Capabilities: Private 00:08:20.138 Size (in LBAs): 1310720 (5GiB) 00:08:20.138 Capacity (in LBAs): 1310720 (5GiB) 00:08:20.138 Utilization (in LBAs): 1310720 (5GiB) 00:08:20.138 Thin Provisioning: Not Supported 00:08:20.138 Per-NS Atomic Units: No 00:08:20.138 Maximum Single Source Range Length: 128 00:08:20.138 Maximum Copy Length: 128 00:08:20.138 Maximum Source Range Count: 128 00:08:20.138 NGUID/EUI64 Never Reused: No 00:08:20.138 Namespace Write Protected: No 00:08:20.138 Number of LBA Formats: 8 00:08:20.138 Current LBA Format: LBA Format #04
00:08:20.138 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.138 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.138 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.138 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.138 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.138 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.138 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.138 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.138 00:08:20.138 NVM Specific Namespace Data 00:08:20.138 =========================== 00:08:20.138 Logical Block Storage Tag Mask: 0 00:08:20.138 Protection Information Capabilities: 00:08:20.138 16b Guard Protection Information Storage Tag Support: No 00:08:20.138 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.138 Storage Tag Check Read Support: No 00:08:20.138 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.138 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.138 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.139 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.139 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.139 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.139 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.139 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.139 ===================================================== 00:08:20.139 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:20.139 ===================================================== 00:08:20.139 Controller Capabilities/Features 00:08:20.139 ================================ 00:08:20.139 Vendor ID: 1b36 00:08:20.139 Subsystem Vendor ID: 1af4 00:08:20.139 Serial Number: 12343 00:08:20.139 Model Number: QEMU NVMe Ctrl 00:08:20.139 Firmware Version: 8.0.0 00:08:20.139 Recommended Arb Burst: 6 00:08:20.139 IEEE OUI Identifier: 00 54 52 00:08:20.139 Multi-path I/O 00:08:20.139 May have multiple subsystem ports: No 00:08:20.139 May have multiple controllers: Yes 00:08:20.139 Associated with SR-IOV VF: No 00:08:20.139 Max Data Transfer Size: 524288 00:08:20.139 Max Number of Namespaces: 256 00:08:20.139 Max Number of I/O Queues: 64 00:08:20.139 NVMe Specification Version (VS): 1.4 00:08:20.139 NVMe Specification Version (Identify): 1.4 00:08:20.139 Maximum Queue Entries: 2048 00:08:20.139 Contiguous Queues Required: Yes 00:08:20.139 Arbitration Mechanisms Supported 00:08:20.139 Weighted Round Robin: Not Supported 00:08:20.139 Vendor Specific: Not Supported 00:08:20.139 Reset Timeout: 7500 ms 00:08:20.139 Doorbell Stride: 4 bytes 00:08:20.139 NVM Subsystem Reset: Not Supported 00:08:20.139 Command Sets Supported 00:08:20.139 NVM Command Set: Supported 00:08:20.139 Boot Partition: Not Supported 00:08:20.139 Memory Page Size Minimum: 4096 bytes 00:08:20.139 Memory Page Size Maximum: 65536 bytes 00:08:20.139 Persistent Memory Region: Not Supported 00:08:20.139 Optional Asynchronous Events Supported 00:08:20.139 Namespace Attribute Notices: Supported 00:08:20.139 Firmware Activation Notices: Not Supported 00:08:20.139 ANA Change Notices: Not Supported 00:08:20.139 PLE Aggregate Log Change Notices: 
Not Supported 00:08:20.139 LBA Status Info Alert Notices: Not Supported 00:08:20.139 EGE Aggregate Log Change Notices: Not Supported 00:08:20.139 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.139 Zone Descriptor Change Notices: Not Supported 00:08:20.139 Discovery Log Change Notices: Not Supported 00:08:20.139 Controller Attributes 00:08:20.139 128-bit Host Identifier: Not Supported 00:08:20.139 Non-Operational Permissive Mode: Not Supported 00:08:20.139 NVM Sets: Not Supported 00:08:20.139 Read Recovery Levels: Not Supported 00:08:20.139 Endurance Groups: Supported 00:08:20.139 Predictable Latency Mode: Not Supported 00:08:20.139 Traffic Based Keep Alive: Not Supported 00:08:20.139 Namespace Granularity: Not Supported 00:08:20.139 SQ Associations: Not Supported 00:08:20.139 UUID List: Not Supported 00:08:20.139 Multi-Domain Subsystem: Not Supported 00:08:20.139 Fixed Capacity Management: Not Supported 00:08:20.139 Variable Capacity Management: Not Supported 00:08:20.139 Delete Endurance Group: Not Supported 00:08:20.139 Delete NVM Set: Not Supported 00:08:20.139 Extended LBA Formats Supported: Supported 00:08:20.139 Flexible Data Placement Supported: Supported 00:08:20.139 00:08:20.139 Controller Memory Buffer Support 00:08:20.139 ================================ 00:08:20.139 Supported: No 00:08:20.139 00:08:20.139 Persistent Memory Region Support 00:08:20.139 ================================ 00:08:20.139 Supported: No 00:08:20.139 00:08:20.139 Admin Command Set Attributes 00:08:20.139 ============================ 00:08:20.139 Security Send/Receive: Not Supported 00:08:20.139 Format NVM: Supported 00:08:20.139 Firmware Activate/Download: Not Supported 00:08:20.139 Namespace Management: Supported 00:08:20.139 Device Self-Test: Not Supported 00:08:20.139 Directives: Supported 00:08:20.139 NVMe-MI: Not Supported 00:08:20.139 Virtualization Management: Not Supported 00:08:20.139 Doorbell Buffer Config: Supported 00:08:20.139 Get LBA Status Capability: Not Supported 00:08:20.139 Command & Feature Lockdown Capability: Not Supported 00:08:20.139 Abort Command Limit: 4 00:08:20.139 Async Event Request Limit: 4 00:08:20.139 Number of Firmware Slots: N/A 00:08:20.139 Firmware Slot 1 Read-Only: N/A 00:08:20.139 Firmware Activation Without Reset: N/A 00:08:20.139 Multiple Update Detection Support: N/A 00:08:20.139 Firmware Update Granularity: No Information Provided 00:08:20.139 Per-Namespace SMART Log: Yes 00:08:20.139 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.139 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:20.139 Command Effects Log Page: Supported 00:08:20.139 Get Log Page Extended Data: Supported 00:08:20.139 Telemetry Log Pages: Not Supported 00:08:20.139 Persistent Event Log Pages: Not Supported 00:08:20.139 Supported Log Pages Log Page: May Support 00:08:20.139 Commands Supported & Effects Log Page: Not Supported 00:08:20.139 Feature Identifiers & Effects Log Page: May Support 00:08:20.139 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.139 Data Area 4 for Telemetry Log: Not Supported 00:08:20.139 Error Log Page Entries Supported: 1 00:08:20.139 Keep Alive: Not Supported 00:08:20.139 00:08:20.139 NVM Command Set Attributes 00:08:20.139 ========================== 00:08:20.139 Submission Queue Entry Size 00:08:20.139 Max: 64 00:08:20.139 Min: 64 00:08:20.139 Completion Queue Entry Size 00:08:20.139 Max: 16 00:08:20.139 Min: 16 00:08:20.139 Number of Namespaces: 256 00:08:20.139 Compare Command: Supported 00:08:20.139 Write Uncorrectable
Command: Not Supported 00:08:20.139 Dataset Management Command: Supported 00:08:20.139 Write Zeroes Command: Supported 00:08:20.139 Set Features Save Field: Supported 00:08:20.139 Reservations: Not Supported 00:08:20.139 Timestamp: Supported 00:08:20.139 Copy: Supported 00:08:20.139 Volatile Write Cache: Present 00:08:20.139 Atomic Write Unit (Normal): 1 00:08:20.139 Atomic Write Unit (PFail): 1 00:08:20.139 Atomic Compare & Write Unit: 1 00:08:20.139 Fused Compare & Write: Not Supported 00:08:20.139 Scatter-Gather List 00:08:20.139 SGL Command Set: Supported 00:08:20.139 SGL Keyed: Not Supported 00:08:20.139 SGL Bit Bucket Descriptor: Not Supported 00:08:20.139 SGL Metadata Pointer: Not Supported 00:08:20.139 Oversized SGL: Not Supported 00:08:20.139 SGL Metadata Address: Not Supported 00:08:20.139 SGL Offset: Not Supported 00:08:20.139 Transport SGL Data Block: Not Supported 00:08:20.139 Replay Protected Memory Block: Not Supported 00:08:20.139 00:08:20.139 Firmware Slot Information 00:08:20.139 ========================= 00:08:20.139 Active slot: 1 00:08:20.139 Slot 1 Firmware Revision: 1.0 00:08:20.139 00:08:20.139 00:08:20.139 Commands Supported and Effects 00:08:20.139 ============================== 00:08:20.139 Admin Commands 00:08:20.139 -------------- 00:08:20.139 Delete I/O Submission Queue (00h): Supported 00:08:20.139 Create I/O Submission Queue (01h): Supported 00:08:20.139 Get Log Page (02h): Supported 00:08:20.139 Delete I/O Completion Queue (04h): Supported 00:08:20.139 Create I/O Completion Queue (05h): Supported 00:08:20.139 Identify (06h): Supported 00:08:20.139 Abort (08h): Supported 00:08:20.139 Set Features (09h): Supported 00:08:20.139 Get Features (0Ah): Supported 00:08:20.139 Asynchronous Event Request (0Ch): Supported 00:08:20.139 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.139 Directive Send (19h): Supported 00:08:20.139 Directive Receive (1Ah): Supported 00:08:20.139 Virtualization Management (1Ch): Supported 00:08:20.139 Doorbell Buffer Config (7Ch): Supported 00:08:20.139 Format NVM (80h): Supported LBA-Change 00:08:20.139 I/O Commands 00:08:20.139 ------------ 00:08:20.139 Flush (00h): Supported LBA-Change 00:08:20.139 Write (01h): Supported LBA-Change 00:08:20.139 Read (02h): Supported 00:08:20.139 Compare (05h): Supported 00:08:20.139 Write Zeroes (08h): Supported LBA-Change 00:08:20.139 Dataset Management (09h): Supported LBA-Change 00:08:20.139 Unknown (0Ch): Supported 00:08:20.139 Unknown (12h): Supported 00:08:20.139 Copy (19h): Supported LBA-Change 00:08:20.139 Unknown (1Dh): Supported LBA-Change 00:08:20.139 00:08:20.139 Error Log 00:08:20.139 ========= 00:08:20.139 00:08:20.139 Arbitration 00:08:20.139 =========== 00:08:20.139 Arbitration Burst: no limit 00:08:20.139 00:08:20.139 Power Management 00:08:20.140 ================ 00:08:20.140 Number of Power States: 1 00:08:20.140 Current Power State: Power State #0 00:08:20.140 Power State #0: 00:08:20.140 Max Power: 25.00 W 00:08:20.140 Non-Operational State: Operational 00:08:20.140 Entry Latency: 16 microseconds 00:08:20.140 Exit Latency: 4 microseconds 00:08:20.140 Relative Read Throughput: 0 00:08:20.140 Relative Read Latency: 0 00:08:20.140 Relative Write Throughput: 0 00:08:20.140 Relative Write Latency: 0 00:08:20.140 Idle Power: Not Reported 00:08:20.140 Active Power: Not Reported 00:08:20.140 Non-Operational Permissive Mode: Not Supported 00:08:20.140 00:08:20.140 Health Information 00:08:20.140 ================== 00:08:20.140 Critical Warnings: 00:08:20.140 Available 
Spare Space: OK 00:08:20.140 Temperature: OK 00:08:20.140 Device Reliability: OK 00:08:20.140 Read Only: No 00:08:20.140 Volatile Memory Backup: OK 00:08:20.140 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.140 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.140 Available Spare: 0% 00:08:20.140 Available Spare Threshold: 0% 00:08:20.140 Life Percentage Used: 0% 00:08:20.140 Data Units Read: 850 00:08:20.140 Data Units Written: 779 00:08:20.140 Host Read Commands: 35097 00:08:20.140 Host Write Commands: 34520 00:08:20.140 Controller Busy Time: 0 minutes 00:08:20.140 Power Cycles: 0 00:08:20.140 Power On Hours: 0 hours 00:08:20.140 Unsafe Shutdowns: 0 00:08:20.140 Unrecoverable Media Errors: 0 00:08:20.140 Lifetime Error Log Entries: 0 00:08:20.140 Warning Temperature Time: 0 minutes 00:08:20.140 Critical Temperature Time: 0 minutes 00:08:20.140 00:08:20.140 Number of Queues 00:08:20.140 ================ 00:08:20.140 Number of I/O Submission Queues: 64 00:08:20.140 Number of I/O Completion Queues: 64 00:08:20.140 00:08:20.140 ZNS Specific Controller Data 00:08:20.140 ============================ 00:08:20.140 Zone Append Size Limit: 0 00:08:20.140 00:08:20.140 00:08:20.140 Active Namespaces 00:08:20.140 ================= 00:08:20.140 Namespace ID:1 00:08:20.140 Error Recovery Timeout: Unlimited 00:08:20.140 Command Set Identifier: NVM (00h) 00:08:20.140 Deallocate: Supported 00:08:20.140 Deallocated/Unwritten Error: Supported 00:08:20.140 Deallocated Read Value: All 0x00 00:08:20.140 Deallocate in Write Zeroes: Not Supported 00:08:20.140 Deallocated Guard Field: 0xFFFF 00:08:20.140 Flush: Supported 00:08:20.140 Reservation: Not Supported 00:08:20.140 Namespace Sharing Capabilities: Multiple Controllers 00:08:20.140 Size (in LBAs): 262144 (1GiB) 00:08:20.140 Capacity (in LBAs): 262144 (1GiB) 00:08:20.140 Utilization (in LBAs): 262144 (1GiB) 00:08:20.140 Thin Provisioning: Not Supported 00:08:20.140 Per-NS Atomic Units: No 00:08:20.140 Maximum Single Source Range Length: 128 00:08:20.140 Maximum Copy Length: 128 00:08:20.140 Maximum Source Range Count: 128 00:08:20.140 NGUID/EUI64 Never Reused: No 00:08:20.140 Namespace Write Protected: No 00:08:20.140 Endurance group ID: 1 00:08:20.140 Number of LBA Formats: 8 00:08:20.140 Current LBA Format: LBA Format #04 00:08:20.140 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.140 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.140 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.140 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.140 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.140 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.140 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.140 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.140 00:08:20.140 Get Feature FDP: 00:08:20.140 ================ 00:08:20.140 Enabled: Yes 00:08:20.140 FDP configuration index: 0 00:08:20.140 00:08:20.140 FDP configurations log page 00:08:20.140 =========================== 00:08:20.140 Number of FDP configurations: 1 00:08:20.140 Version: 0 00:08:20.140 Size: 112 00:08:20.140 FDP Configuration Descriptor: 0 00:08:20.140 Descriptor Size: 96 00:08:20.140 Reclaim Group Identifier format: 2 00:08:20.140 FDP Volatile Write Cache: Not Present 00:08:20.140 FDP Configuration: Valid 00:08:20.140 Vendor Specific Size: 0 00:08:20.140 Number of Reclaim Groups: 2 00:08:20.140 Number of Reclaim Unit Handles: 8 00:08:20.140 Max Placement Identifiers: 128 00:08:20.140 Number of Namespaces
Supported: 256 00:08:20.140 Reclaim Unit Nominal Size: 6000000 bytes 00:08:20.140 Estimated Reclaim Unit Time Limit: Not Reported 00:08:20.140 RUH Desc #000: RUH Type: Initially Isolated 00:08:20.140 RUH Desc #001: RUH Type: Initially Isolated 00:08:20.140 RUH Desc #002: RUH Type: Initially Isolated 00:08:20.140 RUH Desc #003: RUH Type: Initially Isolated 00:08:20.140 RUH Desc #004: RUH Type: Initially Isolated 00:08:20.140 RUH Desc #005: RUH Type: Initially Isolated 00:08:20.140 RUH Desc #006: RUH Type: Initially Isolated 00:08:20.140 RUH Desc #007: RUH Type: Initially Isolated 00:08:20.140 00:08:20.140 FDP reclaim unit handle usage log page 00:08:20.140 ====================================== 00:08:20.140 Number of Reclaim Unit Handles: 8 00:08:20.140 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:20.140 RUH Usage Desc #001: RUH Attributes: Unused 00:08:20.140 RUH Usage Desc #002: RUH Attributes: Unused 00:08:20.140 RUH Usage Desc #003: RUH Attributes: Unused 00:08:20.140 RUH Usage Desc #004: RUH Attributes: Unused 00:08:20.140 RUH Usage Desc #005: RUH Attributes: Unused 00:08:20.140 RUH Usage Desc #006: RUH Attributes: Unused 00:08:20.140 RUH Usage Desc #007: RUH Attributes: Unused 00:08:20.140 00:08:20.140 FDP statistics log page 00:08:20.140 ======================= 00:08:20.140 Host bytes with metadata written: 456761344 00:08:20.140 Media bytes with metadata written: 456835072 00:08:20.140 [2024-11-17 00:40:12.119576] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 75186 terminated unexpected 00:08:20.140 Media bytes erased: 0 00:08:20.140 00:08:20.140 FDP events log page 00:08:20.140 =================== 00:08:20.140 Number of FDP events: 0 00:08:20.140 00:08:20.140 NVM Specific Namespace Data 00:08:20.140 =========================== 00:08:20.140 Logical Block Storage Tag Mask: 0 00:08:20.140 Protection Information Capabilities: 00:08:20.140 16b Guard Protection Information Storage Tag Support: No 00:08:20.140 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.140 Storage Tag Check Read Support: No 00:08:20.140 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.140 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.140 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.140 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.140 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.140 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.140 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.140 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.140 ===================================================== 00:08:20.140 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:20.140 ===================================================== 00:08:20.140 Controller Capabilities/Features 00:08:20.140 ================================ 00:08:20.140 Vendor ID: 1b36 00:08:20.140 Subsystem Vendor ID: 1af4 00:08:20.140 Serial Number: 12342 00:08:20.140 Model Number: QEMU NVMe Ctrl 00:08:20.140 Firmware Version: 8.0.0 00:08:20.140 Recommended Arb Burst: 6 00:08:20.140 IEEE OUI Identifier: 00 54 52 00:08:20.140 Multi-path I/O 00:08:20.140 May have
multiple subsystem ports: No 00:08:20.140 May have multiple controllers: No 00:08:20.140 Associated with SR-IOV VF: No 00:08:20.140 Max Data Transfer Size: 524288 00:08:20.140 Max Number of Namespaces: 256 00:08:20.140 Max Number of I/O Queues: 64 00:08:20.140 NVMe Specification Version (VS): 1.4 00:08:20.140 NVMe Specification Version (Identify): 1.4 00:08:20.140 Maximum Queue Entries: 2048 00:08:20.140 Contiguous Queues Required: Yes 00:08:20.141 Arbitration Mechanisms Supported 00:08:20.141 Weighted Round Robin: Not Supported 00:08:20.141 Vendor Specific: Not Supported 00:08:20.141 Reset Timeout: 7500 ms 00:08:20.141 Doorbell Stride: 4 bytes 00:08:20.141 NVM Subsystem Reset: Not Supported 00:08:20.141 Command Sets Supported 00:08:20.141 NVM Command Set: Supported 00:08:20.141 Boot Partition: Not Supported 00:08:20.141 Memory Page Size Minimum: 4096 bytes 00:08:20.141 Memory Page Size Maximum: 65536 bytes 00:08:20.141 Persistent Memory Region: Not Supported 00:08:20.141 Optional Asynchronous Events Supported 00:08:20.141 Namespace Attribute Notices: Supported 00:08:20.141 Firmware Activation Notices: Not Supported 00:08:20.141 ANA Change Notices: Not Supported 00:08:20.141 PLE Aggregate Log Change Notices: Not Supported 00:08:20.141 LBA Status Info Alert Notices: Not Supported 00:08:20.141 EGE Aggregate Log Change Notices: Not Supported 00:08:20.141 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.141 Zone Descriptor Change Notices: Not Supported 00:08:20.141 Discovery Log Change Notices: Not Supported 00:08:20.141 Controller Attributes 00:08:20.141 128-bit Host Identifier: Not Supported 00:08:20.141 Non-Operational Permissive Mode: Not Supported 00:08:20.141 NVM Sets: Not Supported 00:08:20.141 Read Recovery Levels: Not Supported 00:08:20.141 Endurance Groups: Not Supported 00:08:20.141 Predictable Latency Mode: Not Supported 00:08:20.141 Traffic Based Keep Alive: Not Supported 00:08:20.141 Namespace Granularity: Not Supported 00:08:20.141 SQ Associations: Not Supported 00:08:20.141 UUID List: Not Supported 00:08:20.141 Multi-Domain Subsystem: Not Supported 00:08:20.141 Fixed Capacity Management: Not Supported 00:08:20.141 Variable Capacity Management: Not Supported 00:08:20.141 Delete Endurance Group: Not Supported 00:08:20.141 Delete NVM Set: Not Supported 00:08:20.141 Extended LBA Formats Supported: Supported 00:08:20.141 Flexible Data Placement Supported: Not Supported 00:08:20.141 00:08:20.141 Controller Memory Buffer Support 00:08:20.141 ================================ 00:08:20.141 Supported: No 00:08:20.141 00:08:20.141 Persistent Memory Region Support 00:08:20.141 ================================ 00:08:20.141 Supported: No 00:08:20.141 00:08:20.141 Admin Command Set Attributes 00:08:20.141 ============================ 00:08:20.141 Security Send/Receive: Not Supported 00:08:20.141 Format NVM: Supported 00:08:20.141 Firmware Activate/Download: Not Supported 00:08:20.141 Namespace Management: Supported 00:08:20.141 Device Self-Test: Not Supported 00:08:20.141 Directives: Supported 00:08:20.141 NVMe-MI: Not Supported 00:08:20.141 Virtualization Management: Not Supported 00:08:20.141 Doorbell Buffer Config: Supported 00:08:20.141 Get LBA Status Capability: Not Supported 00:08:20.141 Command & Feature Lockdown Capability: Not Supported 00:08:20.141 Abort Command Limit: 4 00:08:20.141 Async Event Request Limit: 4 00:08:20.141 Number of Firmware Slots: N/A 00:08:20.141 Firmware Slot 1 Read-Only: N/A 00:08:20.141 Firmware Activation Without Reset: N/A 00:08:20.141 Multiple
Update Detection Support: N/A 00:08:20.141 Firmware Update Granularity: No Information Provided 00:08:20.141 Per-Namespace SMART Log: Yes 00:08:20.141 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.141 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:20.141 Command Effects Log Page: Supported 00:08:20.141 Get Log Page Extended Data: Supported 00:08:20.141 Telemetry Log Pages: Not Supported 00:08:20.141 Persistent Event Log Pages: Not Supported 00:08:20.141 Supported Log Pages Log Page: May Support 00:08:20.141 Commands Supported & Effects Log Page: Not Supported 00:08:20.141 Feature Identifiers & Effects Log Page: May Support 00:08:20.141 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.141 Data Area 4 for Telemetry Log: Not Supported 00:08:20.141 Error Log Page Entries Supported: 1 00:08:20.141 Keep Alive: Not Supported 00:08:20.141 00:08:20.141 NVM Command Set Attributes 00:08:20.141 ========================== 00:08:20.141 Submission Queue Entry Size 00:08:20.141 Max: 64 00:08:20.141 Min: 64 00:08:20.141 Completion Queue Entry Size 00:08:20.141 Max: 16 00:08:20.141 Min: 16 00:08:20.141 Number of Namespaces: 256 00:08:20.141 Compare Command: Supported 00:08:20.141 Write Uncorrectable Command: Not Supported 00:08:20.141 Dataset Management Command: Supported 00:08:20.141 Write Zeroes Command: Supported 00:08:20.141 Set Features Save Field: Supported 00:08:20.141 Reservations: Not Supported 00:08:20.141 Timestamp: Supported 00:08:20.141 Copy: Supported 00:08:20.141 Volatile Write Cache: Present 00:08:20.141 Atomic Write Unit (Normal): 1 00:08:20.141 Atomic Write Unit (PFail): 1 00:08:20.141 Atomic Compare & Write Unit: 1 00:08:20.141 Fused Compare & Write: Not Supported 00:08:20.141 Scatter-Gather List 00:08:20.141 SGL Command Set: Supported 00:08:20.141 SGL Keyed: Not Supported 00:08:20.141 SGL Bit Bucket Descriptor: Not Supported 00:08:20.141 SGL Metadata Pointer: Not Supported 00:08:20.141 Oversized SGL: Not Supported 00:08:20.141 SGL Metadata Address: Not Supported 00:08:20.141 SGL Offset: Not Supported 00:08:20.141 Transport SGL Data Block: Not Supported 00:08:20.141 Replay Protected Memory Block: Not Supported 00:08:20.141 00:08:20.141 Firmware Slot Information 00:08:20.141 ========================= 00:08:20.141 Active slot: 1 00:08:20.141 Slot 1 Firmware Revision: 1.0 00:08:20.141 00:08:20.141 00:08:20.141 Commands Supported and Effects 00:08:20.141 ============================== 00:08:20.141 Admin Commands 00:08:20.141 -------------- 00:08:20.141 Delete I/O Submission Queue (00h): Supported 00:08:20.141 Create I/O Submission Queue (01h): Supported 00:08:20.141 Get Log Page (02h): Supported 00:08:20.141 Delete I/O Completion Queue (04h): Supported 00:08:20.141 Create I/O Completion Queue (05h): Supported 00:08:20.141 Identify (06h): Supported 00:08:20.141 Abort (08h): Supported 00:08:20.141 Set Features (09h): Supported 00:08:20.141 Get Features (0Ah): Supported 00:08:20.141 Asynchronous Event Request (0Ch): Supported 00:08:20.141 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.141 Directive Send (19h): Supported 00:08:20.141 Directive Receive (1Ah): Supported 00:08:20.141 Virtualization Management (1Ch): Supported 00:08:20.141 Doorbell Buffer Config (7Ch): Supported 00:08:20.141 Format NVM (80h): Supported LBA-Change 00:08:20.141 I/O Commands 00:08:20.141 ------------ 00:08:20.141 Flush (00h): Supported LBA-Change 00:08:20.141 Write (01h): Supported LBA-Change 00:08:20.141 Read (02h): Supported 00:08:20.141 Compare (05h): Supported 00:08:20.141
Write Zeroes (08h): Supported LBA-Change 00:08:20.141 Dataset Management (09h): Supported LBA-Change 00:08:20.141 Unknown (0Ch): Supported 00:08:20.141 Unknown (12h): Supported 00:08:20.141 Copy (19h): Supported LBA-Change 00:08:20.141 Unknown (1Dh): Supported LBA-Change 00:08:20.141 00:08:20.141 Error Log 00:08:20.141 ========= 00:08:20.141 00:08:20.141 Arbitration 00:08:20.141 =========== 00:08:20.141 Arbitration Burst: no limit 00:08:20.141 00:08:20.141 Power Management 00:08:20.141 ================ 00:08:20.141 Number of Power States: 1 00:08:20.141 Current Power State: Power State #0 00:08:20.141 Power State #0: 00:08:20.141 Max Power: 25.00 W 00:08:20.141 Non-Operational State: Operational 00:08:20.141 Entry Latency: 16 microseconds 00:08:20.141 Exit Latency: 4 microseconds 00:08:20.141 Relative Read Throughput: 0 00:08:20.141 Relative Read Latency: 0 00:08:20.141 Relative Write Throughput: 0 00:08:20.141 Relative Write Latency: 0 00:08:20.141 Idle Power: Not Reported 00:08:20.141 Active Power: Not Reported 00:08:20.141 Non-Operational Permissive Mode: Not Supported 00:08:20.141 00:08:20.141 Health Information 00:08:20.141 ================== 00:08:20.141 Critical Warnings: 00:08:20.141 Available Spare Space: OK 00:08:20.141 Temperature: OK 00:08:20.142 Device Reliability: OK 00:08:20.142 Read Only: No 00:08:20.142 Volatile Memory Backup: OK 00:08:20.142 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.142 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.142 Available Spare: 0% 00:08:20.142 Available Spare Threshold: 0% 00:08:20.142 Life Percentage Used: 0% 00:08:20.142 Data Units Read: 2222 00:08:20.142 Data Units Written: 2009 00:08:20.142 Host Read Commands: 102740 00:08:20.142 Host Write Commands: 101009 00:08:20.142 Controller Busy Time: 0 minutes 00:08:20.142 Power Cycles: 0 00:08:20.142 Power On Hours: 0 hours 00:08:20.142 Unsafe Shutdowns: 0 00:08:20.142 Unrecoverable Media Errors: 0 00:08:20.142 Lifetime Error Log Entries: 0 00:08:20.142 Warning Temperature Time: 0 minutes 00:08:20.142 Critical Temperature Time: 0 minutes 00:08:20.142 00:08:20.142 Number of Queues 00:08:20.142 ================ 00:08:20.142 Number of I/O Submission Queues: 64 00:08:20.142 Number of I/O Completion Queues: 64 00:08:20.142 00:08:20.142 ZNS Specific Controller Data 00:08:20.142 ============================ 00:08:20.142 Zone Append Size Limit: 0 00:08:20.142 00:08:20.142 00:08:20.142 Active Namespaces 00:08:20.142 ================= 00:08:20.142 Namespace ID:1 00:08:20.142 Error Recovery Timeout: Unlimited 00:08:20.142 Command Set Identifier: NVM (00h) 00:08:20.142 Deallocate: Supported 00:08:20.142 Deallocated/Unwritten Error: Supported 00:08:20.142 Deallocated Read Value: All 0x00 00:08:20.142 Deallocate in Write Zeroes: Not Supported 00:08:20.142 Deallocated Guard Field: 0xFFFF 00:08:20.142 Flush: Supported 00:08:20.142 Reservation: Not Supported 00:08:20.142 Namespace Sharing Capabilities: Private 00:08:20.142 Size (in LBAs): 1048576 (4GiB) 00:08:20.142 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.142 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.142 Thin Provisioning: Not Supported 00:08:20.142 Per-NS Atomic Units: No 00:08:20.142 Maximum Single Source Range Length: 128 00:08:20.142 Maximum Copy Length: 128 00:08:20.142 Maximum Source Range Count: 128 00:08:20.142 NGUID/EUI64 Never Reused: No 00:08:20.142 Namespace Write Protected: No 00:08:20.142 Number of LBA Formats: 8 00:08:20.142 Current LBA Format: LBA Format #04 00:08:20.142 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:08:20.142 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.142 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.142 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.142 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.142 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.142 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.142 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.142 00:08:20.142 NVM Specific Namespace Data 00:08:20.142 =========================== 00:08:20.142 Logical Block Storage Tag Mask: 0 00:08:20.142 Protection Information Capabilities: 00:08:20.142 16b Guard Protection Information Storage Tag Support: No 00:08:20.142 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.142 Storage Tag Check Read Support: No 00:08:20.142 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Namespace ID:2 00:08:20.142 Error Recovery Timeout: Unlimited 00:08:20.142 Command Set Identifier: NVM (00h) 00:08:20.142 Deallocate: Supported 00:08:20.142 Deallocated/Unwritten Error: Supported 00:08:20.142 Deallocated Read Value: All 0x00 00:08:20.142 Deallocate in Write Zeroes: Not Supported 00:08:20.142 Deallocated Guard Field: 0xFFFF 00:08:20.142 Flush: Supported 00:08:20.142 Reservation: Not Supported 00:08:20.142 Namespace Sharing Capabilities: Private 00:08:20.142 Size (in LBAs): 1048576 (4GiB) 00:08:20.142 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.142 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.142 Thin Provisioning: Not Supported 00:08:20.142 Per-NS Atomic Units: No 00:08:20.142 Maximum Single Source Range Length: 128 00:08:20.142 Maximum Copy Length: 128 00:08:20.142 Maximum Source Range Count: 128 00:08:20.142 NGUID/EUI64 Never Reused: No 00:08:20.142 Namespace Write Protected: No 00:08:20.142 Number of LBA Formats: 8 00:08:20.142 Current LBA Format: LBA Format #04 00:08:20.142 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.142 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.142 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.142 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.142 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.142 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.142 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.142 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.142 00:08:20.142 NVM Specific Namespace Data 00:08:20.142 =========================== 00:08:20.142 Logical Block Storage Tag Mask: 0 00:08:20.142 Protection Information Capabilities: 00:08:20.142 16b Guard Protection Information Storage Tag Support: No 00:08:20.142 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.142 Storage Tag 
Check Read Support: No 00:08:20.142 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Namespace ID:3 00:08:20.142 Error Recovery Timeout: Unlimited 00:08:20.142 Command Set Identifier: NVM (00h) 00:08:20.142 Deallocate: Supported 00:08:20.142 Deallocated/Unwritten Error: Supported 00:08:20.142 Deallocated Read Value: All 0x00 00:08:20.142 Deallocate in Write Zeroes: Not Supported 00:08:20.142 Deallocated Guard Field: 0xFFFF 00:08:20.142 Flush: Supported 00:08:20.142 Reservation: Not Supported 00:08:20.142 Namespace Sharing Capabilities: Private 00:08:20.142 Size (in LBAs): 1048576 (4GiB) 00:08:20.142 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.142 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.142 Thin Provisioning: Not Supported 00:08:20.142 Per-NS Atomic Units: No 00:08:20.142 Maximum Single Source Range Length: 128 00:08:20.142 Maximum Copy Length: 128 00:08:20.142 Maximum Source Range Count: 128 00:08:20.142 NGUID/EUI64 Never Reused: No 00:08:20.142 Namespace Write Protected: No 00:08:20.142 Number of LBA Formats: 8 00:08:20.142 Current LBA Format: LBA Format #04 00:08:20.142 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.142 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.142 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.142 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.142 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.142 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.142 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.142 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.142 00:08:20.142 NVM Specific Namespace Data 00:08:20.142 =========================== 00:08:20.142 Logical Block Storage Tag Mask: 0 00:08:20.142 Protection Information Capabilities: 00:08:20.142 16b Guard Protection Information Storage Tag Support: No 00:08:20.142 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.142 Storage Tag Check Read Support: No 00:08:20.142 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.142 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:20.142 00:40:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:20.142 00:40:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:20.402 ===================================================== 00:08:20.402 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:20.402 ===================================================== 00:08:20.402 Controller Capabilities/Features 00:08:20.402 ================================ 00:08:20.402 Vendor ID: 1b36 00:08:20.402 Subsystem Vendor ID: 1af4 00:08:20.402 Serial Number: 12340 00:08:20.402 Model Number: QEMU NVMe Ctrl 00:08:20.402 Firmware Version: 8.0.0 00:08:20.402 Recommended Arb Burst: 6 00:08:20.402 IEEE OUI Identifier: 00 54 52 00:08:20.402 Multi-path I/O 00:08:20.402 May have multiple subsystem ports: No 00:08:20.402 May have multiple controllers: No 00:08:20.402 Associated with SR-IOV VF: No 00:08:20.402 Max Data Transfer Size: 524288 00:08:20.402 Max Number of Namespaces: 256 00:08:20.402 Max Number of I/O Queues: 64 00:08:20.402 NVMe Specification Version (VS): 1.4 00:08:20.402 NVMe Specification Version (Identify): 1.4 00:08:20.402 Maximum Queue Entries: 2048 00:08:20.402 Contiguous Queues Required: Yes 00:08:20.402 Arbitration Mechanisms Supported 00:08:20.402 Weighted Round Robin: Not Supported 00:08:20.402 Vendor Specific: Not Supported 00:08:20.402 Reset Timeout: 7500 ms 00:08:20.402 Doorbell Stride: 4 bytes 00:08:20.402 NVM Subsystem Reset: Not Supported 00:08:20.402 Command Sets Supported 00:08:20.402 NVM Command Set: Supported 00:08:20.402 Boot Partition: Not Supported 00:08:20.402 Memory Page Size Minimum: 4096 bytes 00:08:20.402 Memory Page Size Maximum: 65536 bytes 00:08:20.402 Persistent Memory Region: Not Supported 00:08:20.402 Optional Asynchronous Events Supported 00:08:20.402 Namespace Attribute Notices: Supported 00:08:20.402 Firmware Activation Notices: Not Supported 00:08:20.402 ANA Change Notices: Not Supported 00:08:20.402 PLE Aggregate Log Change Notices: Not Supported 00:08:20.402 LBA Status Info Alert Notices: Not Supported 00:08:20.402 EGE Aggregate Log Change Notices: Not Supported 00:08:20.402 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.402 Zone Descriptor Change Notices: Not Supported 00:08:20.402 Discovery Log Change Notices: Not Supported 00:08:20.402 Controller Attributes 00:08:20.402 128-bit Host Identifier: Not Supported 00:08:20.402 Non-Operational Permissive Mode: Not Supported 00:08:20.402 NVM Sets: Not Supported 00:08:20.402 Read Recovery Levels: Not Supported 00:08:20.402 Endurance Groups: Not Supported 00:08:20.402 Predictable Latency Mode: Not Supported 00:08:20.403 Traffic Based Keep Alive: Not Supported 00:08:20.403 Namespace Granularity: Not Supported 00:08:20.403 SQ Associations: Not Supported 00:08:20.403 UUID List: Not Supported 00:08:20.403 Multi-Domain Subsystem: Not Supported 00:08:20.403 Fixed Capacity Management: Not Supported 00:08:20.403 Variable Capacity Management: Not Supported 00:08:20.403 Delete Endurance Group: Not Supported 00:08:20.403 Delete NVM Set: Not Supported 00:08:20.403 Extended LBA Formats Supported: Supported 00:08:20.403 Flexible Data Placement Supported: Not Supported 00:08:20.403 00:08:20.403 Controller Memory Buffer Support 00:08:20.403 ================================ 00:08:20.403 Supported: No 00:08:20.403 00:08:20.403 Persistent Memory Region Support 00:08:20.403
================================ 00:08:20.403 Supported: No 00:08:20.403 00:08:20.403 Admin Command Set Attributes 00:08:20.403 ============================ 00:08:20.403 Security Send/Receive: Not Supported 00:08:20.403 Format NVM: Supported 00:08:20.403 Firmware Activate/Download: Not Supported 00:08:20.403 Namespace Management: Supported 00:08:20.403 Device Self-Test: Not Supported 00:08:20.403 Directives: Supported 00:08:20.403 NVMe-MI: Not Supported 00:08:20.403 Virtualization Management: Not Supported 00:08:20.403 Doorbell Buffer Config: Supported 00:08:20.403 Get LBA Status Capability: Not Supported 00:08:20.403 Command & Feature Lockdown Capability: Not Supported 00:08:20.403 Abort Command Limit: 4 00:08:20.403 Async Event Request Limit: 4 00:08:20.403 Number of Firmware Slots: N/A 00:08:20.403 Firmware Slot 1 Read-Only: N/A 00:08:20.403 Firmware Activation Without Reset: N/A 00:08:20.403 Multiple Update Detection Support: N/A 00:08:20.403 Firmware Update Granularity: No Information Provided 00:08:20.403 Per-Namespace SMART Log: Yes 00:08:20.403 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.403 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:20.403 Command Effects Log Page: Supported 00:08:20.403 Get Log Page Extended Data: Supported 00:08:20.403 Telemetry Log Pages: Not Supported 00:08:20.403 Persistent Event Log Pages: Not Supported 00:08:20.403 Supported Log Pages Log Page: May Support 00:08:20.403 Commands Supported & Effects Log Page: Not Supported 00:08:20.403 Feature Identifiers & Effects Log Page: May Support 00:08:20.403 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.403 Data Area 4 for Telemetry Log: Not Supported 00:08:20.403 Error Log Page Entries Supported: 1 00:08:20.403 Keep Alive: Not Supported 00:08:20.403 00:08:20.403 NVM Command Set Attributes 00:08:20.403 ========================== 00:08:20.403 Submission Queue Entry Size 00:08:20.403 Max: 64 00:08:20.403 Min: 64 00:08:20.403 Completion Queue Entry Size 00:08:20.403 Max: 16 00:08:20.403 Min: 16 00:08:20.403 Number of Namespaces: 256 00:08:20.403 Compare Command: Supported 00:08:20.403 Write Uncorrectable Command: Not Supported 00:08:20.403 Dataset Management Command: Supported 00:08:20.403 Write Zeroes Command: Supported 00:08:20.403 Set Features Save Field: Supported 00:08:20.403 Reservations: Not Supported 00:08:20.403 Timestamp: Supported 00:08:20.403 Copy: Supported 00:08:20.403 Volatile Write Cache: Present 00:08:20.403 Atomic Write Unit (Normal): 1 00:08:20.403 Atomic Write Unit (PFail): 1 00:08:20.403 Atomic Compare & Write Unit: 1 00:08:20.403 Fused Compare & Write: Not Supported 00:08:20.403 Scatter-Gather List 00:08:20.403 SGL Command Set: Supported 00:08:20.403 SGL Keyed: Not Supported 00:08:20.403 SGL Bit Bucket Descriptor: Not Supported 00:08:20.403 SGL Metadata Pointer: Not Supported 00:08:20.403 Oversized SGL: Not Supported 00:08:20.403 SGL Metadata Address: Not Supported 00:08:20.403 SGL Offset: Not Supported 00:08:20.403 Transport SGL Data Block: Not Supported 00:08:20.403 Replay Protected Memory Block: Not Supported 00:08:20.403 00:08:20.403 Firmware Slot Information 00:08:20.403 ========================= 00:08:20.403 Active slot: 1 00:08:20.403 Slot 1 Firmware Revision: 1.0 00:08:20.403 00:08:20.403 00:08:20.403 Commands Supported and Effects 00:08:20.403 ============================== 00:08:20.403 Admin Commands 00:08:20.403 -------------- 00:08:20.403 Delete I/O Submission Queue (00h): Supported 00:08:20.403 Create I/O Submission Queue (01h): Supported 00:08:20.403
Get Log Page (02h): Supported 00:08:20.403 Delete I/O Completion Queue (04h): Supported 00:08:20.403 Create I/O Completion Queue (05h): Supported 00:08:20.403 Identify (06h): Supported 00:08:20.403 Abort (08h): Supported 00:08:20.403 Set Features (09h): Supported 00:08:20.403 Get Features (0Ah): Supported 00:08:20.403 Asynchronous Event Request (0Ch): Supported 00:08:20.403 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.403 Directive Send (19h): Supported 00:08:20.403 Directive Receive (1Ah): Supported 00:08:20.403 Virtualization Management (1Ch): Supported 00:08:20.403 Doorbell Buffer Config (7Ch): Supported 00:08:20.403 Format NVM (80h): Supported LBA-Change 00:08:20.403 I/O Commands 00:08:20.403 ------------ 00:08:20.403 Flush (00h): Supported LBA-Change 00:08:20.403 Write (01h): Supported LBA-Change 00:08:20.403 Read (02h): Supported 00:08:20.403 Compare (05h): Supported 00:08:20.403 Write Zeroes (08h): Supported LBA-Change 00:08:20.403 Dataset Management (09h): Supported LBA-Change 00:08:20.403 Unknown (0Ch): Supported 00:08:20.403 Unknown (12h): Supported 00:08:20.403 Copy (19h): Supported LBA-Change 00:08:20.403 Unknown (1Dh): Supported LBA-Change 00:08:20.403 00:08:20.403 Error Log 00:08:20.403 ========= 00:08:20.403 00:08:20.403 Arbitration 00:08:20.403 =========== 00:08:20.403 Arbitration Burst: no limit 00:08:20.403 00:08:20.403 Power Management 00:08:20.403 ================ 00:08:20.403 Number of Power States: 1 00:08:20.403 Current Power State: Power State #0 00:08:20.403 Power State #0: 00:08:20.403 Max Power: 25.00 W 00:08:20.403 Non-Operational State: Operational 00:08:20.403 Entry Latency: 16 microseconds 00:08:20.403 Exit Latency: 4 microseconds 00:08:20.403 Relative Read Throughput: 0 00:08:20.403 Relative Read Latency: 0 00:08:20.403 Relative Write Throughput: 0 00:08:20.403 Relative Write Latency: 0 00:08:20.403 Idle Power: Not Reported 00:08:20.403 Active Power: Not Reported 00:08:20.403 Non-Operational Permissive Mode: Not Supported 00:08:20.403 00:08:20.403 Health Information 00:08:20.403 ================== 00:08:20.403 Critical Warnings: 00:08:20.403 Available Spare Space: OK 00:08:20.403 Temperature: OK 00:08:20.403 Device Reliability: OK 00:08:20.403 Read Only: No 00:08:20.403 Volatile Memory Backup: OK 00:08:20.403 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.403 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.403 Available Spare: 0% 00:08:20.403 Available Spare Threshold: 0% 00:08:20.403 Life Percentage Used: 0% 00:08:20.403 Data Units Read: 699 00:08:20.403 Data Units Written: 627 00:08:20.403 Host Read Commands: 33742 00:08:20.403 Host Write Commands: 33528 00:08:20.403 Controller Busy Time: 0 minutes 00:08:20.403 Power Cycles: 0 00:08:20.403 Power On Hours: 0 hours 00:08:20.403 Unsafe Shutdowns: 0 00:08:20.403 Unrecoverable Media Errors: 0 00:08:20.403 Lifetime Error Log Entries: 0 00:08:20.403 Warning Temperature Time: 0 minutes 00:08:20.403 Critical Temperature Time: 0 minutes 00:08:20.403 00:08:20.403 Number of Queues 00:08:20.403 ================ 00:08:20.403 Number of I/O Submission Queues: 64 00:08:20.403 Number of I/O Completion Queues: 64 00:08:20.403 00:08:20.403 ZNS Specific Controller Data 00:08:20.403 ============================ 00:08:20.403 Zone Append Size Limit: 0 00:08:20.403 00:08:20.403 00:08:20.403 Active Namespaces 00:08:20.403 ================= 00:08:20.403 Namespace ID:1 00:08:20.403 Error Recovery Timeout: Unlimited 00:08:20.403 Command Set Identifier: NVM (00h) 00:08:20.403 Deallocate: Supported 
00:08:20.403 Deallocated/Unwritten Error: Supported 00:08:20.403 Deallocated Read Value: All 0x00 00:08:20.403 Deallocate in Write Zeroes: Not Supported 00:08:20.403 Deallocated Guard Field: 0xFFFF 00:08:20.403 Flush: Supported 00:08:20.403 Reservation: Not Supported 00:08:20.403 Metadata Transferred as: Separate Metadata Buffer 00:08:20.403 Namespace Sharing Capabilities: Private 00:08:20.403 Size (in LBAs): 1548666 (5GiB) 00:08:20.403 Capacity (in LBAs): 1548666 (5GiB) 00:08:20.403 Utilization (in LBAs): 1548666 (5GiB) 00:08:20.403 Thin Provisioning: Not Supported 00:08:20.403 Per-NS Atomic Units: No 00:08:20.403 Maximum Single Source Range Length: 128 00:08:20.404 Maximum Copy Length: 128 00:08:20.404 Maximum Source Range Count: 128 00:08:20.404 NGUID/EUI64 Never Reused: No 00:08:20.404 Namespace Write Protected: No 00:08:20.404 Number of LBA Formats: 8 00:08:20.404 Current LBA Format: LBA Format #07 00:08:20.404 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.404 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.404 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.404 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.404 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.404 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.404 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.404 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.404 00:08:20.404 NVM Specific Namespace Data 00:08:20.404 =========================== 00:08:20.404 Logical Block Storage Tag Mask: 0 00:08:20.404 Protection Information Capabilities: 00:08:20.404 16b Guard Protection Information Storage Tag Support: No 00:08:20.404 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.404 Storage Tag Check Read Support: No 00:08:20.404 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.404 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.404 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.404 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.404 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.404 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.404 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.404 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.404 00:40:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:20.404 00:40:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:20.663 ===================================================== 00:08:20.663 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:20.663 ===================================================== 00:08:20.663 Controller Capabilities/Features 00:08:20.663 ================================ 00:08:20.663 Vendor ID: 1b36 00:08:20.663 Subsystem Vendor ID: 1af4 00:08:20.663 Serial Number: 12341 00:08:20.663 Model Number: QEMU NVMe Ctrl 00:08:20.663 Firmware Version: 8.0.0 00:08:20.663 Recommended Arb Burst: 6 00:08:20.663 IEEE OUI Identifier: 00 54 52 00:08:20.663 Multi-path I/O 00:08:20.664 May have multiple subsystem ports: No 00:08:20.664 May have multiple 
controllers: No 00:08:20.664 Associated with SR-IOV VF: No 00:08:20.664 Max Data Transfer Size: 524288 00:08:20.664 Max Number of Namespaces: 256 00:08:20.664 Max Number of I/O Queues: 64 00:08:20.664 NVMe Specification Version (VS): 1.4 00:08:20.664 NVMe Specification Version (Identify): 1.4 00:08:20.664 Maximum Queue Entries: 2048 00:08:20.664 Contiguous Queues Required: Yes 00:08:20.664 Arbitration Mechanisms Supported 00:08:20.664 Weighted Round Robin: Not Supported 00:08:20.664 Vendor Specific: Not Supported 00:08:20.664 Reset Timeout: 7500 ms 00:08:20.664 Doorbell Stride: 4 bytes 00:08:20.664 NVM Subsystem Reset: Not Supported 00:08:20.664 Command Sets Supported 00:08:20.664 NVM Command Set: Supported 00:08:20.664 Boot Partition: Not Supported 00:08:20.664 Memory Page Size Minimum: 4096 bytes 00:08:20.664 Memory Page Size Maximum: 65536 bytes 00:08:20.664 Persistent Memory Region: Not Supported 00:08:20.664 Optional Asynchronous Events Supported 00:08:20.664 Namespace Attribute Notices: Supported 00:08:20.664 Firmware Activation Notices: Not Supported 00:08:20.664 ANA Change Notices: Not Supported 00:08:20.664 PLE Aggregate Log Change Notices: Not Supported 00:08:20.664 LBA Status Info Alert Notices: Not Supported 00:08:20.664 EGE Aggregate Log Change Notices: Not Supported 00:08:20.664 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.664 Zone Descriptor Change Notices: Not Supported 00:08:20.664 Discovery Log Change Notices: Not Supported 00:08:20.664 Controller Attributes 00:08:20.664 128-bit Host Identifier: Not Supported 00:08:20.664 Non-Operational Permissive Mode: Not Supported 00:08:20.664 NVM Sets: Not Supported 00:08:20.664 Read Recovery Levels: Not Supported 00:08:20.664 Endurance Groups: Not Supported 00:08:20.664 Predictable Latency Mode: Not Supported 00:08:20.664 Traffic Based Keep Alive: Not Supported 00:08:20.664 Namespace Granularity: Not Supported 00:08:20.664 SQ Associations: Not Supported 00:08:20.664 UUID List: Not Supported 00:08:20.664 Multi-Domain Subsystem: Not Supported 00:08:20.664 Fixed Capacity Management: Not Supported 00:08:20.664 Variable Capacity Management: Not Supported 00:08:20.664 Delete Endurance Group: Not Supported 00:08:20.664 Delete NVM Set: Not Supported 00:08:20.664 Extended LBA Formats Supported: Supported 00:08:20.664 Flexible Data Placement Supported: Not Supported 00:08:20.664 00:08:20.664 Controller Memory Buffer Support 00:08:20.664 ================================ 00:08:20.664 Supported: No 00:08:20.664 00:08:20.664 Persistent Memory Region Support 00:08:20.664 ================================ 00:08:20.664 Supported: No 00:08:20.664 00:08:20.664 Admin Command Set Attributes 00:08:20.664 ============================ 00:08:20.664 Security Send/Receive: Not Supported 00:08:20.664 Format NVM: Supported 00:08:20.664 Firmware Activate/Download: Not Supported 00:08:20.664 Namespace Management: Supported 00:08:20.664 Device Self-Test: Not Supported 00:08:20.664 Directives: Supported 00:08:20.664 NVMe-MI: Not Supported 00:08:20.664 Virtualization Management: Not Supported 00:08:20.664 Doorbell Buffer Config: Supported 00:08:20.664 Get LBA Status Capability: Not Supported 00:08:20.664 Command & Feature Lockdown Capability: Not Supported 00:08:20.664 Abort Command Limit: 4 00:08:20.664 Async Event Request Limit: 4 00:08:20.664 Number of Firmware Slots: N/A 00:08:20.664 Firmware Slot 1 Read-Only: N/A 00:08:20.664 Firmware Activation Without Reset: N/A 00:08:20.664 Multiple Update Detection Support: N/A 00:08:20.664 Firmware Update
Granularity: No Information Provided 00:08:20.664 Per-Namespace SMART Log: Yes 00:08:20.664 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.664 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:20.664 Command Effects Log Page: Supported 00:08:20.664 Get Log Page Extended Data: Supported 00:08:20.664 Telemetry Log Pages: Not Supported 00:08:20.664 Persistent Event Log Pages: Not Supported 00:08:20.664 Supported Log Pages Log Page: May Support 00:08:20.664 Commands Supported & Effects Log Page: Not Supported 00:08:20.664 Feature Identifiers & Effects Log Page: May Support 00:08:20.664 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.664 Data Area 4 for Telemetry Log: Not Supported 00:08:20.664 Error Log Page Entries Supported: 1 00:08:20.664 Keep Alive: Not Supported 00:08:20.664 00:08:20.664 NVM Command Set Attributes 00:08:20.664 ========================== 00:08:20.664 Submission Queue Entry Size 00:08:20.664 Max: 64 00:08:20.664 Min: 64 00:08:20.664 Completion Queue Entry Size 00:08:20.664 Max: 16 00:08:20.664 Min: 16 00:08:20.664 Number of Namespaces: 256 00:08:20.664 Compare Command: Supported 00:08:20.664 Write Uncorrectable Command: Not Supported 00:08:20.664 Dataset Management Command: Supported 00:08:20.664 Write Zeroes Command: Supported 00:08:20.664 Set Features Save Field: Supported 00:08:20.664 Reservations: Not Supported 00:08:20.664 Timestamp: Supported 00:08:20.664 Copy: Supported 00:08:20.664 Volatile Write Cache: Present 00:08:20.664 Atomic Write Unit (Normal): 1 00:08:20.664 Atomic Write Unit (PFail): 1 00:08:20.664 Atomic Compare & Write Unit: 1 00:08:20.664 Fused Compare & Write: Not Supported 00:08:20.664 Scatter-Gather List 00:08:20.664 SGL Command Set: Supported 00:08:20.664 SGL Keyed: Not Supported 00:08:20.664 SGL Bit Bucket Descriptor: Not Supported 00:08:20.664 SGL Metadata Pointer: Not Supported 00:08:20.664 Oversized SGL: Not Supported 00:08:20.664 SGL Metadata Address: Not Supported 00:08:20.664 SGL Offset: Not Supported 00:08:20.664 Transport SGL Data Block: Not Supported 00:08:20.664 Replay Protected Memory Block: Not Supported 00:08:20.664 00:08:20.664 Firmware Slot Information 00:08:20.664 ========================= 00:08:20.664 Active slot: 1 00:08:20.664 Slot 1 Firmware Revision: 1.0 00:08:20.664 00:08:20.664 00:08:20.664 Commands Supported and Effects 00:08:20.664 ============================== 00:08:20.664 Admin Commands 00:08:20.664 -------------- 00:08:20.664 Delete I/O Submission Queue (00h): Supported 00:08:20.664 Create I/O Submission Queue (01h): Supported 00:08:20.664 Get Log Page (02h): Supported 00:08:20.664 Delete I/O Completion Queue (04h): Supported 00:08:20.664 Create I/O Completion Queue (05h): Supported 00:08:20.664 Identify (06h): Supported 00:08:20.664 Abort (08h): Supported 00:08:20.664 Set Features (09h): Supported 00:08:20.664 Get Features (0Ah): Supported 00:08:20.664 Asynchronous Event Request (0Ch): Supported 00:08:20.664 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.664 Directive Send (19h): Supported 00:08:20.664 Directive Receive (1Ah): Supported 00:08:20.664 Virtualization Management (1Ch): Supported 00:08:20.664 Doorbell Buffer Config (7Ch): Supported 00:08:20.664 Format NVM (80h): Supported LBA-Change 00:08:20.664 I/O Commands 00:08:20.664 ------------ 00:08:20.664 Flush (00h): Supported LBA-Change 00:08:20.664 Write (01h): Supported LBA-Change 00:08:20.664 Read (02h): Supported 00:08:20.664 Compare (05h): Supported 00:08:20.664 Write Zeroes (08h): Supported LBA-Change 00:08:20.664
Dataset Management (09h): Supported LBA-Change 00:08:20.664 Unknown (0Ch): Supported 00:08:20.664 Unknown (12h): Supported 00:08:20.664 Copy (19h): Supported LBA-Change 00:08:20.664 Unknown (1Dh): Supported LBA-Change 00:08:20.664 00:08:20.664 Error Log 00:08:20.664 ========= 00:08:20.664 00:08:20.664 Arbitration 00:08:20.664 =========== 00:08:20.664 Arbitration Burst: no limit 00:08:20.664 00:08:20.664 Power Management 00:08:20.664 ================ 00:08:20.664 Number of Power States: 1 00:08:20.664 Current Power State: Power State #0 00:08:20.664 Power State #0: 00:08:20.664 Max Power: 25.00 W 00:08:20.664 Non-Operational State: Operational 00:08:20.664 Entry Latency: 16 microseconds 00:08:20.664 Exit Latency: 4 microseconds 00:08:20.664 Relative Read Throughput: 0 00:08:20.664 Relative Read Latency: 0 00:08:20.664 Relative Write Throughput: 0 00:08:20.664 Relative Write Latency: 0 00:08:20.664 Idle Power: Not Reported 00:08:20.664 Active Power: Not Reported 00:08:20.664 Non-Operational Permissive Mode: Not Supported 00:08:20.664 00:08:20.664 Health Information 00:08:20.664 ================== 00:08:20.664 Critical Warnings: 00:08:20.665 Available Spare Space: OK 00:08:20.665 Temperature: OK 00:08:20.665 Device Reliability: OK 00:08:20.665 Read Only: No 00:08:20.665 Volatile Memory Backup: OK 00:08:20.665 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.665 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.665 Available Spare: 0% 00:08:20.665 Available Spare Threshold: 0% 00:08:20.665 Life Percentage Used: 0% 00:08:20.665 Data Units Read: 1061 00:08:20.665 Data Units Written: 934 00:08:20.665 Host Read Commands: 50065 00:08:20.665 Host Write Commands: 48948 00:08:20.665 Controller Busy Time: 0 minutes 00:08:20.665 Power Cycles: 0 00:08:20.665 Power On Hours: 0 hours 00:08:20.665 Unsafe Shutdowns: 0 00:08:20.665 Unrecoverable Media Errors: 0 00:08:20.665 Lifetime Error Log Entries: 0 00:08:20.665 Warning Temperature Time: 0 minutes 00:08:20.665 Critical Temperature Time: 0 minutes 00:08:20.665 00:08:20.665 Number of Queues 00:08:20.665 ================ 00:08:20.665 Number of I/O Submission Queues: 64 00:08:20.665 Number of I/O Completion Queues: 64 00:08:20.665 00:08:20.665 ZNS Specific Controller Data 00:08:20.665 ============================ 00:08:20.665 Zone Append Size Limit: 0 00:08:20.665 00:08:20.665 00:08:20.665 Active Namespaces 00:08:20.665 ================= 00:08:20.665 Namespace ID:1 00:08:20.665 Error Recovery Timeout: Unlimited 00:08:20.665 Command Set Identifier: NVM (00h) 00:08:20.665 Deallocate: Supported 00:08:20.665 Deallocated/Unwritten Error: Supported 00:08:20.665 Deallocated Read Value: All 0x00 00:08:20.665 Deallocate in Write Zeroes: Not Supported 00:08:20.665 Deallocated Guard Field: 0xFFFF 00:08:20.665 Flush: Supported 00:08:20.665 Reservation: Not Supported 00:08:20.665 Namespace Sharing Capabilities: Private 00:08:20.665 Size (in LBAs): 1310720 (5GiB) 00:08:20.665 Capacity (in LBAs): 1310720 (5GiB) 00:08:20.665 Utilization (in LBAs): 1310720 (5GiB) 00:08:20.665 Thin Provisioning: Not Supported 00:08:20.665 Per-NS Atomic Units: No 00:08:20.665 Maximum Single Source Range Length: 128 00:08:20.665 Maximum Copy Length: 128 00:08:20.665 Maximum Source Range Count: 128 00:08:20.665 NGUID/EUI64 Never Reused: No 00:08:20.665 Namespace Write Protected: No 00:08:20.665 Number of LBA Formats: 8 00:08:20.665 Current LBA Format: LBA Format #04 00:08:20.665 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.665 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:20.665 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.665 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.665 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.665 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.665 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.665 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.665 00:08:20.665 NVM Specific Namespace Data 00:08:20.665 =========================== 00:08:20.665 Logical Block Storage Tag Mask: 0 00:08:20.665 Protection Information Capabilities: 00:08:20.665 16b Guard Protection Information Storage Tag Support: No 00:08:20.665 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.665 Storage Tag Check Read Support: No 00:08:20.665 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.665 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.665 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.665 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.665 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.665 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.665 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.665 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.665 00:40:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:20.665 00:40:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:20.665 ===================================================== 00:08:20.665 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:20.665 ===================================================== 00:08:20.665 Controller Capabilities/Features 00:08:20.665 ================================ 00:08:20.665 Vendor ID: 1b36 00:08:20.665 Subsystem Vendor ID: 1af4 00:08:20.665 Serial Number: 12342 00:08:20.665 Model Number: QEMU NVMe Ctrl 00:08:20.665 Firmware Version: 8.0.0 00:08:20.665 Recommended Arb Burst: 6 00:08:20.665 IEEE OUI Identifier: 00 54 52 00:08:20.665 Multi-path I/O 00:08:20.665 May have multiple subsystem ports: No 00:08:20.665 May have multiple controllers: No 00:08:20.665 Associated with SR-IOV VF: No 00:08:20.665 Max Data Transfer Size: 524288 00:08:20.665 Max Number of Namespaces: 256 00:08:20.665 Max Number of I/O Queues: 64 00:08:20.665 NVMe Specification Version (VS): 1.4 00:08:20.665 NVMe Specification Version (Identify): 1.4 00:08:20.665 Maximum Queue Entries: 2048 00:08:20.665 Contiguous Queues Required: Yes 00:08:20.665 Arbitration Mechanisms Supported 00:08:20.665 Weighted Round Robin: Not Supported 00:08:20.665 Vendor Specific: Not Supported 00:08:20.665 Reset Timeout: 7500 ms 00:08:20.665 Doorbell Stride: 4 bytes 00:08:20.665 NVM Subsystem Reset: Not Supported 00:08:20.665 Command Sets Supported 00:08:20.665 NVM Command Set: Supported 00:08:20.665 Boot Partition: Not Supported 00:08:20.665 Memory Page Size Minimum: 4096 bytes 00:08:20.665 Memory Page Size Maximum: 65536 bytes 00:08:20.665 Persistent Memory Region: Not Supported 00:08:20.665 Optional Asynchronous Events Supported 00:08:20.665 Namespace Attribute Notices: Supported 00:08:20.665 Firmware 
Activation Notices: Not Supported 00:08:20.665 ANA Change Notices: Not Supported 00:08:20.665 PLE Aggregate Log Change Notices: Not Supported 00:08:20.665 LBA Status Info Alert Notices: Not Supported 00:08:20.665 EGE Aggregate Log Change Notices: Not Supported 00:08:20.665 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.665 Zone Descriptor Change Notices: Not Supported 00:08:20.665 Discovery Log Change Notices: Not Supported 00:08:20.665 Controller Attributes 00:08:20.665 128-bit Host Identifier: Not Supported 00:08:20.665 Non-Operational Permissive Mode: Not Supported 00:08:20.665 NVM Sets: Not Supported 00:08:20.665 Read Recovery Levels: Not Supported 00:08:20.665 Endurance Groups: Not Supported 00:08:20.665 Predictable Latency Mode: Not Supported 00:08:20.665 Traffic Based Keep Alive: Not Supported 00:08:20.665 Namespace Granularity: Not Supported 00:08:20.665 SQ Associations: Not Supported 00:08:20.665 UUID List: Not Supported 00:08:20.665 Multi-Domain Subsystem: Not Supported 00:08:20.665 Fixed Capacity Management: Not Supported 00:08:20.665 Variable Capacity Management: Not Supported 00:08:20.665 Delete Endurance Group: Not Supported 00:08:20.665 Delete NVM Set: Not Supported 00:08:20.665 Extended LBA Formats Supported: Supported 00:08:20.665 Flexible Data Placement Supported: Not Supported 00:08:20.665 00:08:20.665 Controller Memory Buffer Support 00:08:20.665 ================================ 00:08:20.665 Supported: No 00:08:20.665 00:08:20.665 Persistent Memory Region Support 00:08:20.665 ================================ 00:08:20.665 Supported: No 00:08:20.665 00:08:20.665 Admin Command Set Attributes 00:08:20.665 ============================ 00:08:20.665 Security Send/Receive: Not Supported 00:08:20.665 Format NVM: Supported 00:08:20.665 Firmware Activate/Download: Not Supported 00:08:20.665 Namespace Management: Supported 00:08:20.665 Device Self-Test: Not Supported 00:08:20.665 Directives: Supported 00:08:20.665 NVMe-MI: Not Supported 00:08:20.665 Virtualization Management: Not Supported 00:08:20.665 Doorbell Buffer Config: Supported 00:08:20.665 Get LBA Status Capability: Not Supported 00:08:20.665 Command & Feature Lockdown Capability: Not Supported 00:08:20.665 Abort Command Limit: 4 00:08:20.665 Async Event Request Limit: 4 00:08:20.665 Number of Firmware Slots: N/A 00:08:20.665 Firmware Slot 1 Read-Only: N/A 00:08:20.665 Firmware Activation Without Reset: N/A 00:08:20.665 Multiple Update Detection Support: N/A 00:08:20.665 Firmware Update Granularity: No Information Provided 00:08:20.665 Per-Namespace SMART Log: Yes 00:08:20.665 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.665 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:20.665 Command Effects Log Page: Supported 00:08:20.665 Get Log Page Extended Data: Supported 00:08:20.665 Telemetry Log Pages: Not Supported 00:08:20.665 Persistent Event Log Pages: Not Supported 00:08:20.665 Supported Log Pages Log Page: May Support 00:08:20.665 Commands Supported & Effects Log Page: Not Supported 00:08:20.665 Feature Identifiers & Effects Log Page: May Support 00:08:20.666 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.666 Data Area 4 for Telemetry Log: Not Supported 00:08:20.666 Error Log Page Entries Supported: 1 00:08:20.666 Keep Alive: Not Supported 00:08:20.666 00:08:20.666 NVM Command Set Attributes 00:08:20.666 ========================== 00:08:20.666 Submission Queue Entry Size 00:08:20.666 Max: 64 00:08:20.666 Min: 64 00:08:20.666 Completion Queue Entry Size 00:08:20.666 Max: 16 
00:08:20.666 Min: 16 00:08:20.666 Number of Namespaces: 256 00:08:20.666 Compare Command: Supported 00:08:20.666 Write Uncorrectable Command: Not Supported 00:08:20.666 Dataset Management Command: Supported 00:08:20.666 Write Zeroes Command: Supported 00:08:20.666 Set Features Save Field: Supported 00:08:20.666 Reservations: Not Supported 00:08:20.666 Timestamp: Supported 00:08:20.666 Copy: Supported 00:08:20.666 Volatile Write Cache: Present 00:08:20.666 Atomic Write Unit (Normal): 1 00:08:20.666 Atomic Write Unit (PFail): 1 00:08:20.666 Atomic Compare & Write Unit: 1 00:08:20.666 Fused Compare & Write: Not Supported 00:08:20.666 Scatter-Gather List 00:08:20.666 SGL Command Set: Supported 00:08:20.666 SGL Keyed: Not Supported 00:08:20.666 SGL Bit Bucket Descriptor: Not Supported 00:08:20.666 SGL Metadata Pointer: Not Supported 00:08:20.666 Oversized SGL: Not Supported 00:08:20.666 SGL Metadata Address: Not Supported 00:08:20.666 SGL Offset: Not Supported 00:08:20.666 Transport SGL Data Block: Not Supported 00:08:20.666 Replay Protected Memory Block: Not Supported 00:08:20.666 00:08:20.666 Firmware Slot Information 00:08:20.666 ========================= 00:08:20.666 Active slot: 1 00:08:20.666 Slot 1 Firmware Revision: 1.0 00:08:20.666 00:08:20.666 00:08:20.666 Commands Supported and Effects 00:08:20.666 ============================== 00:08:20.666 Admin Commands 00:08:20.666 -------------- 00:08:20.666 Delete I/O Submission Queue (00h): Supported 00:08:20.666 Create I/O Submission Queue (01h): Supported 00:08:20.666 Get Log Page (02h): Supported 00:08:20.666 Delete I/O Completion Queue (04h): Supported 00:08:20.666 Create I/O Completion Queue (05h): Supported 00:08:20.666 Identify (06h): Supported 00:08:20.666 Abort (08h): Supported 00:08:20.666 Set Features (09h): Supported 00:08:20.666 Get Features (0Ah): Supported 00:08:20.666 Asynchronous Event Request (0Ch): Supported 00:08:20.666 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.666 Directive Send (19h): Supported 00:08:20.666 Directive Receive (1Ah): Supported 00:08:20.666 Virtualization Management (1Ch): Supported 00:08:20.666 Doorbell Buffer Config (7Ch): Supported 00:08:20.666 Format NVM (80h): Supported LBA-Change 00:08:20.666 I/O Commands 00:08:20.666 ------------ 00:08:20.666 Flush (00h): Supported LBA-Change 00:08:20.666 Write (01h): Supported LBA-Change 00:08:20.666 Read (02h): Supported 00:08:20.666 Compare (05h): Supported 00:08:20.666 Write Zeroes (08h): Supported LBA-Change 00:08:20.666 Dataset Management (09h): Supported LBA-Change 00:08:20.666 Unknown (0Ch): Supported 00:08:20.666 Unknown (12h): Supported 00:08:20.666 Copy (19h): Supported LBA-Change 00:08:20.666 Unknown (1Dh): Supported LBA-Change 00:08:20.666 00:08:20.666 Error Log 00:08:20.666 ========= 00:08:20.666 00:08:20.666 Arbitration 00:08:20.666 =========== 00:08:20.666 Arbitration Burst: no limit 00:08:20.666 00:08:20.666 Power Management 00:08:20.666 ================ 00:08:20.666 Number of Power States: 1 00:08:20.666 Current Power State: Power State #0 00:08:20.666 Power State #0: 00:08:20.666 Max Power: 25.00 W 00:08:20.666 Non-Operational State: Operational 00:08:20.666 Entry Latency: 16 microseconds 00:08:20.666 Exit Latency: 4 microseconds 00:08:20.666 Relative Read Throughput: 0 00:08:20.666 Relative Read Latency: 0 00:08:20.666 Relative Write Throughput: 0 00:08:20.666 Relative Write Latency: 0 00:08:20.666 Idle Power: Not Reported 00:08:20.666 Active Power: Not Reported 00:08:20.666 Non-Operational Permissive Mode: Not Supported 
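A couple of the figures in these identify dumps can be cross-checked by hand: namespace sizes are reported in LBAs, with the active LBA Format #04 giving a 4096-byte data size, and temperatures are reported in Kelvin next to their Celsius equivalents. A minimal shell sketch of that arithmetic, using values copied from the dumps above (the variable names are ours, not SPDK's):

    # Cross-check of figures printed by spdk_nvme_identify above.
    lbas=1310720   # Size (in LBAs) reported for the 12341 namespace
    block=4096     # data size of the active LBA Format #04
    echo "$(( lbas * block / 1024 / 1024 / 1024 )) GiB"   # prints "5 GiB", as reported

    kelvin=323     # Current Temperature from the Health Information sections
    echo "$(( kelvin - 273 )) Celsius"                    # prints "50 Celsius", as reported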
00:08:20.666 00:08:20.666 Health Information 00:08:20.666 ================== 00:08:20.666 Critical Warnings: 00:08:20.666 Available Spare Space: OK 00:08:20.666 Temperature: OK 00:08:20.666 Device Reliability: OK 00:08:20.666 Read Only: No 00:08:20.666 Volatile Memory Backup: OK 00:08:20.666 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.666 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.666 Available Spare: 0% 00:08:20.666 Available Spare Threshold: 0% 00:08:20.666 Life Percentage Used: 0% 00:08:20.666 Data Units Read: 2222 00:08:20.666 Data Units Written: 2009 00:08:20.666 Host Read Commands: 102740 00:08:20.666 Host Write Commands: 101009 00:08:20.666 Controller Busy Time: 0 minutes 00:08:20.666 Power Cycles: 0 00:08:20.666 Power On Hours: 0 hours 00:08:20.666 Unsafe Shutdowns: 0 00:08:20.666 Unrecoverable Media Errors: 0 00:08:20.666 Lifetime Error Log Entries: 0 00:08:20.666 Warning Temperature Time: 0 minutes 00:08:20.666 Critical Temperature Time: 0 minutes 00:08:20.666 00:08:20.666 Number of Queues 00:08:20.666 ================ 00:08:20.666 Number of I/O Submission Queues: 64 00:08:20.666 Number of I/O Completion Queues: 64 00:08:20.666 00:08:20.666 ZNS Specific Controller Data 00:08:20.666 ============================ 00:08:20.666 Zone Append Size Limit: 0 00:08:20.666 00:08:20.666 00:08:20.666 Active Namespaces 00:08:20.666 ================= 00:08:20.666 Namespace ID:1 00:08:20.666 Error Recovery Timeout: Unlimited 00:08:20.666 Command Set Identifier: NVM (00h) 00:08:20.666 Deallocate: Supported 00:08:20.666 Deallocated/Unwritten Error: Supported 00:08:20.666 Deallocated Read Value: All 0x00 00:08:20.666 Deallocate in Write Zeroes: Not Supported 00:08:20.666 Deallocated Guard Field: 0xFFFF 00:08:20.666 Flush: Supported 00:08:20.666 Reservation: Not Supported 00:08:20.666 Namespace Sharing Capabilities: Private 00:08:20.666 Size (in LBAs): 1048576 (4GiB) 00:08:20.666 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.666 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.666 Thin Provisioning: Not Supported 00:08:20.666 Per-NS Atomic Units: No 00:08:20.666 Maximum Single Source Range Length: 128 00:08:20.666 Maximum Copy Length: 128 00:08:20.666 Maximum Source Range Count: 128 00:08:20.666 NGUID/EUI64 Never Reused: No 00:08:20.666 Namespace Write Protected: No 00:08:20.666 Number of LBA Formats: 8 00:08:20.666 Current LBA Format: LBA Format #04 00:08:20.666 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.666 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.666 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.666 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.666 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.666 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.666 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.666 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.666 00:08:20.666 NVM Specific Namespace Data 00:08:20.666 =========================== 00:08:20.666 Logical Block Storage Tag Mask: 0 00:08:20.666 Protection Information Capabilities: 00:08:20.666 16b Guard Protection Information Storage Tag Support: No 00:08:20.666 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.666 Storage Tag Check Read Support: No 00:08:20.666 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.666 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.666 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.666 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.666 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.666 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.666 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.666 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.666 Namespace ID:2 00:08:20.666 Error Recovery Timeout: Unlimited 00:08:20.666 Command Set Identifier: NVM (00h) 00:08:20.666 Deallocate: Supported 00:08:20.666 Deallocated/Unwritten Error: Supported 00:08:20.666 Deallocated Read Value: All 0x00 00:08:20.666 Deallocate in Write Zeroes: Not Supported 00:08:20.666 Deallocated Guard Field: 0xFFFF 00:08:20.666 Flush: Supported 00:08:20.666 Reservation: Not Supported 00:08:20.666 Namespace Sharing Capabilities: Private 00:08:20.666 Size (in LBAs): 1048576 (4GiB) 00:08:20.666 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.666 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.666 Thin Provisioning: Not Supported 00:08:20.667 Per-NS Atomic Units: No 00:08:20.667 Maximum Single Source Range Length: 128 00:08:20.667 Maximum Copy Length: 128 00:08:20.667 Maximum Source Range Count: 128 00:08:20.667 NGUID/EUI64 Never Reused: No 00:08:20.667 Namespace Write Protected: No 00:08:20.667 Number of LBA Formats: 8 00:08:20.667 Current LBA Format: LBA Format #04 00:08:20.667 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.667 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.667 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.667 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.667 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.667 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.667 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.667 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.667 00:08:20.667 NVM Specific Namespace Data 00:08:20.667 =========================== 00:08:20.667 Logical Block Storage Tag Mask: 0 00:08:20.667 Protection Information Capabilities: 00:08:20.667 16b Guard Protection Information Storage Tag Support: No 00:08:20.667 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.667 Storage Tag Check Read Support: No 00:08:20.667 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.667 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.667 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.667 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.667 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.667 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.667 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.667 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.667 Namespace ID:3 00:08:20.667 Error Recovery Timeout: Unlimited 00:08:20.667 Command Set Identifier: NVM (00h) 00:08:20.667 Deallocate: Supported 00:08:20.667 Deallocated/Unwritten Error: Supported 00:08:20.667 Deallocated Read 
Value: All 0x00 00:08:20.667 Deallocate in Write Zeroes: Not Supported 00:08:20.667 Deallocated Guard Field: 0xFFFF 00:08:20.667 Flush: Supported 00:08:20.667 Reservation: Not Supported 00:08:20.667 Namespace Sharing Capabilities: Private 00:08:20.667 Size (in LBAs): 1048576 (4GiB) 00:08:20.667 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.667 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.667 Thin Provisioning: Not Supported 00:08:20.667 Per-NS Atomic Units: No 00:08:20.667 Maximum Single Source Range Length: 128 00:08:20.667 Maximum Copy Length: 128 00:08:20.667 Maximum Source Range Count: 128 00:08:20.667 NGUID/EUI64 Never Reused: No 00:08:20.667 Namespace Write Protected: No 00:08:20.667 Number of LBA Formats: 8 00:08:20.667 Current LBA Format: LBA Format #04 00:08:20.667 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.667 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.667 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.667 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.667 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.667 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.667 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.667 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.667 00:08:20.667 NVM Specific Namespace Data 00:08:20.667 =========================== 00:08:20.667 Logical Block Storage Tag Mask: 0 00:08:20.667 Protection Information Capabilities: 00:08:20.667 16b Guard Protection Information Storage Tag Support: No 00:08:20.667 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.926 Storage Tag Check Read Support: No 00:08:20.926 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.926 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.926 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.926 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.926 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.926 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.926 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.926 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.926 00:40:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:20.926 00:40:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:20.926 ===================================================== 00:08:20.926 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:20.926 ===================================================== 00:08:20.926 Controller Capabilities/Features 00:08:20.926 ================================ 00:08:20.926 Vendor ID: 1b36 00:08:20.926 Subsystem Vendor ID: 1af4 00:08:20.926 Serial Number: 12343 00:08:20.926 Model Number: QEMU NVMe Ctrl 00:08:20.926 Firmware Version: 8.0.0 00:08:20.926 Recommended Arb Burst: 6 00:08:20.926 IEEE OUI Identifier: 00 54 52 00:08:20.926 Multi-path I/O 00:08:20.926 May have multiple subsystem ports: No 00:08:20.926 May have multiple controllers: Yes 00:08:20.926 Associated with SR-IOV VF: No 00:08:20.926 Max Data Transfer Size: 524288 00:08:20.926 Max Number of Namespaces: 
256 00:08:20.926 Max Number of I/O Queues: 64 00:08:20.926 NVMe Specification Version (VS): 1.4 00:08:20.926 NVMe Specification Version (Identify): 1.4 00:08:20.926 Maximum Queue Entries: 2048 00:08:20.926 Contiguous Queues Required: Yes 00:08:20.926 Arbitration Mechanisms Supported 00:08:20.926 Weighted Round Robin: Not Supported 00:08:20.926 Vendor Specific: Not Supported 00:08:20.926 Reset Timeout: 7500 ms 00:08:20.926 Doorbell Stride: 4 bytes 00:08:20.926 NVM Subsystem Reset: Not Supported 00:08:20.926 Command Sets Supported 00:08:20.926 NVM Command Set: Supported 00:08:20.926 Boot Partition: Not Supported 00:08:20.926 Memory Page Size Minimum: 4096 bytes 00:08:20.926 Memory Page Size Maximum: 65536 bytes 00:08:20.926 Persistent Memory Region: Not Supported 00:08:20.926 Optional Asynchronous Events Supported 00:08:20.926 Namespace Attribute Notices: Supported 00:08:20.926 Firmware Activation Notices: Not Supported 00:08:20.926 ANA Change Notices: Not Supported 00:08:20.926 PLE Aggregate Log Change Notices: Not Supported 00:08:20.926 LBA Status Info Alert Notices: Not Supported 00:08:20.926 EGE Aggregate Log Change Notices: Not Supported 00:08:20.926 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.926 Zone Descriptor Change Notices: Not Supported 00:08:20.926 Discovery Log Change Notices: Not Supported 00:08:20.926 Controller Attributes 00:08:20.926 128-bit Host Identifier: Not Supported 00:08:20.926 Non-Operational Permissive Mode: Not Supported 00:08:20.926 NVM Sets: Not Supported 00:08:20.926 Read Recovery Levels: Not Supported 00:08:20.926 Endurance Groups: Supported 00:08:20.926 Predictable Latency Mode: Not Supported 00:08:20.926 Traffic Based Keep Alive: Not Supported 00:08:20.926 Namespace Granularity: Not Supported 00:08:20.926 SQ Associations: Not Supported 00:08:20.926 UUID List: Not Supported 00:08:20.926 Multi-Domain Subsystem: Not Supported 00:08:20.926 Fixed Capacity Management: Not Supported 00:08:20.926 Variable Capacity Management: Not Supported 00:08:20.926 Delete Endurance Group: Not Supported 00:08:20.926 Delete NVM Set: Not Supported 00:08:20.926 Extended LBA Formats Supported: Supported 00:08:20.926 Flexible Data Placement Supported: Supported 00:08:20.926 00:08:20.926 Controller Memory Buffer Support 00:08:20.926 ================================ 00:08:20.926 Supported: No 00:08:20.926 00:08:20.926 Persistent Memory Region Support 00:08:20.926 ================================ 00:08:20.926 Supported: No 00:08:20.926 00:08:20.926 Admin Command Set Attributes 00:08:20.926 ============================ 00:08:20.926 Security Send/Receive: Not Supported 00:08:20.926 Format NVM: Supported 00:08:20.926 Firmware Activate/Download: Not Supported 00:08:20.926 Namespace Management: Supported 00:08:20.926 Device Self-Test: Not Supported 00:08:20.926 Directives: Supported 00:08:20.926 NVMe-MI: Not Supported 00:08:20.926 Virtualization Management: Not Supported 00:08:20.926 Doorbell Buffer Config: Supported 00:08:20.926 Get LBA Status Capability: Not Supported 00:08:20.926 Command & Feature Lockdown Capability: Not Supported 00:08:20.926 Abort Command Limit: 4 00:08:20.926 Async Event Request Limit: 4 00:08:20.926 Number of Firmware Slots: N/A 00:08:20.926 Firmware Slot 1 Read-Only: N/A 00:08:20.926 Firmware Activation Without Reset: N/A 00:08:20.926 Multiple Update Detection Support: N/A 00:08:20.926 Firmware Update Granularity: No Information Provided 00:08:20.926 Per-Namespace SMART Log: Yes 00:08:20.926 Asymmetric Namespace Access Log Page: Not Supported 
00:08:20.926 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:20.926 Command Effects Log Page: Supported 00:08:20.926 Get Log Page Extended Data: Supported 00:08:20.926 Telemetry Log Pages: Not Supported 00:08:20.926 Persistent Event Log Pages: Not Supported 00:08:20.926 Supported Log Pages Log Page: May Support 00:08:20.926 Commands Supported & Effects Log Page: Not Supported 00:08:20.926 Feature Identifiers & Effects Log Page: May Support 00:08:20.926 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.926 Data Area 4 for Telemetry Log: Not Supported 00:08:20.926 Error Log Page Entries Supported: 1 00:08:20.926 Keep Alive: Not Supported 00:08:20.926 00:08:20.926 NVM Command Set Attributes 00:08:20.926 ========================== 00:08:20.926 Submission Queue Entry Size 00:08:20.926 Max: 64 00:08:20.926 Min: 64 00:08:20.926 Completion Queue Entry Size 00:08:20.926 Max: 16 00:08:20.926 Min: 16 00:08:20.926 Number of Namespaces: 256 00:08:20.926 Compare Command: Supported 00:08:20.926 Write Uncorrectable Command: Not Supported 00:08:20.926 Dataset Management Command: Supported 00:08:20.926 Write Zeroes Command: Supported 00:08:20.926 Set Features Save Field: Supported 00:08:20.926 Reservations: Not Supported 00:08:20.926 Timestamp: Supported 00:08:20.926 Copy: Supported 00:08:20.926 Volatile Write Cache: Present 00:08:20.926 Atomic Write Unit (Normal): 1 00:08:20.926 Atomic Write Unit (PFail): 1 00:08:20.926 Atomic Compare & Write Unit: 1 00:08:20.926 Fused Compare & Write: Not Supported 00:08:20.926 Scatter-Gather List 00:08:20.926 SGL Command Set: Supported 00:08:20.926 SGL Keyed: Not Supported 00:08:20.926 SGL Bit Bucket Descriptor: Not Supported 00:08:20.926 SGL Metadata Pointer: Not Supported 00:08:20.926 Oversized SGL: Not Supported 00:08:20.926 SGL Metadata Address: Not Supported 00:08:20.926 SGL Offset: Not Supported 00:08:20.926 Transport SGL Data Block: Not Supported 00:08:20.926 Replay Protected Memory Block: Not Supported 00:08:20.926 00:08:20.926 Firmware Slot Information 00:08:20.926 ========================= 00:08:20.926 Active slot: 1 00:08:20.927 Slot 1 Firmware Revision: 1.0 00:08:20.927 00:08:20.927 00:08:20.927 Commands Supported and Effects 00:08:20.927 ============================== 00:08:20.927 Admin Commands 00:08:20.927 -------------- 00:08:20.927 Delete I/O Submission Queue (00h): Supported 00:08:20.927 Create I/O Submission Queue (01h): Supported 00:08:20.927 Get Log Page (02h): Supported 00:08:20.927 Delete I/O Completion Queue (04h): Supported 00:08:20.927 Create I/O Completion Queue (05h): Supported 00:08:20.927 Identify (06h): Supported 00:08:20.927 Abort (08h): Supported 00:08:20.927 Set Features (09h): Supported 00:08:20.927 Get Features (0Ah): Supported 00:08:20.927 Asynchronous Event Request (0Ch): Supported 00:08:20.927 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.927 Directive Send (19h): Supported 00:08:20.927 Directive Receive (1Ah): Supported 00:08:20.927 Virtualization Management (1Ch): Supported 00:08:20.927 Doorbell Buffer Config (7Ch): Supported 00:08:20.927 Format NVM (80h): Supported LBA-Change 00:08:20.927 I/O Commands 00:08:20.927 ------------ 00:08:20.927 Flush (00h): Supported LBA-Change 00:08:20.927 Write (01h): Supported LBA-Change 00:08:20.927 Read (02h): Supported 00:08:20.927 Compare (05h): Supported 00:08:20.927 Write Zeroes (08h): Supported LBA-Change 00:08:20.927 Dataset Management (09h): Supported LBA-Change 00:08:20.927 Unknown (0Ch): Supported 00:08:20.927 Unknown (12h): Supported 00:08:20.927 Copy 
(19h): Supported LBA-Change 00:08:20.927 Unknown (1Dh): Supported LBA-Change 00:08:20.927 00:08:20.927 Error Log 00:08:20.927 ========= 00:08:20.927 00:08:20.927 Arbitration 00:08:20.927 =========== 00:08:20.927 Arbitration Burst: no limit 00:08:20.927 00:08:20.927 Power Management 00:08:20.927 ================ 00:08:20.927 Number of Power States: 1 00:08:20.927 Current Power State: Power State #0 00:08:20.927 Power State #0: 00:08:20.927 Max Power: 25.00 W 00:08:20.927 Non-Operational State: Operational 00:08:20.927 Entry Latency: 16 microseconds 00:08:20.927 Exit Latency: 4 microseconds 00:08:20.927 Relative Read Throughput: 0 00:08:20.927 Relative Read Latency: 0 00:08:20.927 Relative Write Throughput: 0 00:08:20.927 Relative Write Latency: 0 00:08:20.927 Idle Power: Not Reported 00:08:20.927 Active Power: Not Reported 00:08:20.927 Non-Operational Permissive Mode: Not Supported 00:08:20.927 00:08:20.927 Health Information 00:08:20.927 ================== 00:08:20.927 Critical Warnings: 00:08:20.927 Available Spare Space: OK 00:08:20.927 Temperature: OK 00:08:20.927 Device Reliability: OK 00:08:20.927 Read Only: No 00:08:20.927 Volatile Memory Backup: OK 00:08:20.927 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.927 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.927 Available Spare: 0% 00:08:20.927 Available Spare Threshold: 0% 00:08:20.927 Life Percentage Used: 0% 00:08:20.927 Data Units Read: 850 00:08:20.927 Data Units Written: 779 00:08:20.927 Host Read Commands: 35097 00:08:20.927 Host Write Commands: 34520 00:08:20.927 Controller Busy Time: 0 minutes 00:08:20.927 Power Cycles: 0 00:08:20.927 Power On Hours: 0 hours 00:08:20.927 Unsafe Shutdowns: 0 00:08:20.927 Unrecoverable Media Errors: 0 00:08:20.927 Lifetime Error Log Entries: 0 00:08:20.927 Warning Temperature Time: 0 minutes 00:08:20.927 Critical Temperature Time: 0 minutes 00:08:20.927 00:08:20.927 Number of Queues 00:08:20.927 ================ 00:08:20.927 Number of I/O Submission Queues: 64 00:08:20.927 Number of I/O Completion Queues: 64 00:08:20.927 00:08:20.927 ZNS Specific Controller Data 00:08:20.927 ============================ 00:08:20.927 Zone Append Size Limit: 0 00:08:20.927 00:08:20.927 00:08:20.927 Active Namespaces 00:08:20.927 ================= 00:08:20.927 Namespace ID:1 00:08:20.927 Error Recovery Timeout: Unlimited 00:08:20.927 Command Set Identifier: NVM (00h) 00:08:20.927 Deallocate: Supported 00:08:20.927 Deallocated/Unwritten Error: Supported 00:08:20.927 Deallocated Read Value: All 0x00 00:08:20.927 Deallocate in Write Zeroes: Not Supported 00:08:20.927 Deallocated Guard Field: 0xFFFF 00:08:20.927 Flush: Supported 00:08:20.927 Reservation: Not Supported 00:08:20.927 Namespace Sharing Capabilities: Multiple Controllers 00:08:20.927 Size (in LBAs): 262144 (1GiB) 00:08:20.927 Capacity (in LBAs): 262144 (1GiB) 00:08:20.927 Utilization (in LBAs): 262144 (1GiB) 00:08:20.927 Thin Provisioning: Not Supported 00:08:20.927 Per-NS Atomic Units: No 00:08:20.927 Maximum Single Source Range Length: 128 00:08:20.927 Maximum Copy Length: 128 00:08:20.927 Maximum Source Range Count: 128 00:08:20.927 NGUID/EUI64 Never Reused: No 00:08:20.927 Namespace Write Protected: No 00:08:20.927 Endurance group ID: 1 00:08:20.927 Number of LBA Formats: 8 00:08:20.927 Current LBA Format: LBA Format #04 00:08:20.927 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.927 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.927 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.927 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:20.927 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.927 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.927 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.927 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.927 00:08:20.927 Get Feature FDP: 00:08:20.927 ================ 00:08:20.927 Enabled: Yes 00:08:20.927 FDP configuration index: 0 00:08:20.927 00:08:20.927 FDP configurations log page 00:08:20.927 =========================== 00:08:20.927 Number of FDP configurations: 1 00:08:20.927 Version: 0 00:08:20.927 Size: 112 00:08:20.927 FDP Configuration Descriptor: 0 00:08:20.927 Descriptor Size: 96 00:08:20.927 Reclaim Group Identifier format: 2 00:08:20.927 FDP Volatile Write Cache: Not Present 00:08:20.927 FDP Configuration: Valid 00:08:20.927 Vendor Specific Size: 0 00:08:20.927 Number of Reclaim Groups: 2 00:08:20.927 Number of Reclaim Unit Handles: 8 00:08:20.927 Max Placement Identifiers: 128 00:08:20.927 Number of Namespaces Supported: 256 00:08:20.927 Reclaim Unit Nominal Size: 6000000 bytes 00:08:20.927 Estimated Reclaim Unit Time Limit: Not Reported 00:08:20.927 RUH Desc #000: RUH Type: Initially Isolated 00:08:20.927 RUH Desc #001: RUH Type: Initially Isolated 00:08:20.927 RUH Desc #002: RUH Type: Initially Isolated 00:08:20.927 RUH Desc #003: RUH Type: Initially Isolated 00:08:20.927 RUH Desc #004: RUH Type: Initially Isolated 00:08:20.927 RUH Desc #005: RUH Type: Initially Isolated 00:08:20.927 RUH Desc #006: RUH Type: Initially Isolated 00:08:20.927 RUH Desc #007: RUH Type: Initially Isolated 00:08:20.927 00:08:20.927 FDP reclaim unit handle usage log page 00:08:20.927 ====================================== 00:08:20.927 Number of Reclaim Unit Handles: 8 00:08:20.927 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:20.927 RUH Usage Desc #001: RUH Attributes: Unused 00:08:20.927 RUH Usage Desc #002: RUH Attributes: Unused 00:08:20.927 RUH Usage Desc #003: RUH Attributes: Unused 00:08:20.927 RUH Usage Desc #004: RUH Attributes: Unused 00:08:20.927 RUH Usage Desc #005: RUH Attributes: Unused 00:08:20.927 RUH Usage Desc #006: RUH Attributes: Unused 00:08:20.927 RUH Usage Desc #007: RUH Attributes: Unused 00:08:20.927 00:08:20.927 FDP statistics log page 00:08:20.927 ======================= 00:08:20.927 Host bytes with metadata written: 456761344 00:08:20.927 Media bytes with metadata written: 456835072 00:08:20.927 Media bytes erased: 0 00:08:20.927 00:08:20.927 FDP events log page 00:08:20.927 =================== 00:08:20.927 Number of FDP events: 0 00:08:20.927 00:08:20.927 NVM Specific Namespace Data 00:08:20.927 =========================== 00:08:20.927 Logical Block Storage Tag Mask: 0 00:08:20.927 Protection Information Capabilities: 00:08:20.927 16b Guard Protection Information Storage Tag Support: No 00:08:20.927 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.927 Storage Tag Check Read Support: No 00:08:20.927 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.927 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.927 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.927 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.927 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.927 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.927 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.927 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.927 00:08:20.927 real 0m1.038s 00:08:20.927 user 0m0.329s 00:08:20.928 sys 0m0.506s 00:08:20.928 00:40:12 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.928 00:40:12 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:20.928 ************************************ 00:08:20.928 END TEST nvme_identify 00:08:20.928 ************************************ 00:08:20.928 00:40:12 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:20.928 00:40:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.928 00:40:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.928 00:40:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.185 ************************************ 00:08:21.185 START TEST nvme_perf 00:08:21.185 ************************************ 00:08:21.185 00:40:12 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:21.185 00:40:12 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:22.118 Initializing NVMe Controllers 00:08:22.118 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:22.118 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:22.118 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:22.118 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:22.118 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:22.118 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:22.118 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:22.118 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:22.118 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:22.118 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:22.118 Initialization complete. Launching workers. 
00:08:22.118 ======================================================== 00:08:22.118 Latency(us) 00:08:22.118 Device Information : IOPS MiB/s Average min max 00:08:22.118 PCIE (0000:00:10.0) NSID 1 from core 0: 17045.95 199.76 7511.59 4700.87 30365.27 00:08:22.118 PCIE (0000:00:11.0) NSID 1 from core 0: 17045.95 199.76 7505.78 4533.56 29568.79 00:08:22.118 PCIE (0000:00:13.0) NSID 1 from core 0: 17045.95 199.76 7498.65 4085.03 30074.89 00:08:22.118 PCIE (0000:00:12.0) NSID 1 from core 0: 17045.95 199.76 7491.19 3844.89 29786.87 00:08:22.118 PCIE (0000:00:12.0) NSID 2 from core 0: 17045.95 199.76 7483.80 3660.40 29444.46 00:08:22.118 PCIE (0000:00:12.0) NSID 3 from core 0: 17109.79 200.51 7448.68 3539.76 23217.00 00:08:22.118 ======================================================== 00:08:22.118 Total : 102339.53 1199.29 7489.93 3539.76 30365.27 00:08:22.118 00:08:22.118 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:22.118 ================================================================================= 00:08:22.118 1.00000% : 5973.858us 00:08:22.118 10.00000% : 6125.095us 00:08:22.118 25.00000% : 6377.157us 00:08:22.118 50.00000% : 6755.249us 00:08:22.118 75.00000% : 8267.618us 00:08:22.118 90.00000% : 9427.102us 00:08:22.118 95.00000% : 11241.945us 00:08:22.118 98.00000% : 13107.200us 00:08:22.118 99.00000% : 13611.323us 00:08:22.118 99.50000% : 24903.680us 00:08:22.118 99.90000% : 30045.735us 00:08:22.118 99.99000% : 30449.034us 00:08:22.118 99.99900% : 30449.034us 00:08:22.118 99.99990% : 30449.034us 00:08:22.118 99.99999% : 30449.034us 00:08:22.118 00:08:22.118 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:22.118 ================================================================================= 00:08:22.118 1.00000% : 6049.477us 00:08:22.118 10.00000% : 6200.714us 00:08:22.118 25.00000% : 6402.363us 00:08:22.118 50.00000% : 6755.249us 00:08:22.118 75.00000% : 8267.618us 00:08:22.118 90.00000% : 9376.689us 00:08:22.118 95.00000% : 11241.945us 00:08:22.118 98.00000% : 13208.025us 00:08:22.118 99.00000% : 13611.323us 00:08:22.118 99.50000% : 24197.908us 00:08:22.118 99.90000% : 29440.788us 00:08:22.118 99.99000% : 29642.437us 00:08:22.118 99.99900% : 29642.437us 00:08:22.118 99.99990% : 29642.437us 00:08:22.118 99.99999% : 29642.437us 00:08:22.118 00:08:22.118 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:22.118 ================================================================================= 00:08:22.118 1.00000% : 6049.477us 00:08:22.118 10.00000% : 6175.508us 00:08:22.118 25.00000% : 6402.363us 00:08:22.118 50.00000% : 6755.249us 00:08:22.118 75.00000% : 8217.206us 00:08:22.118 90.00000% : 9376.689us 00:08:22.118 95.00000% : 10989.883us 00:08:22.118 98.00000% : 13308.849us 00:08:22.118 99.00000% : 13712.148us 00:08:22.118 99.50000% : 24097.083us 00:08:22.118 99.90000% : 29844.086us 00:08:22.118 99.99000% : 30247.385us 00:08:22.118 99.99900% : 30247.385us 00:08:22.118 99.99990% : 30247.385us 00:08:22.118 99.99999% : 30247.385us 00:08:22.118 00:08:22.118 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:22.118 ================================================================================= 00:08:22.118 1.00000% : 6049.477us 00:08:22.118 10.00000% : 6175.508us 00:08:22.118 25.00000% : 6402.363us 00:08:22.118 50.00000% : 6755.249us 00:08:22.118 75.00000% : 8217.206us 00:08:22.118 90.00000% : 9376.689us 00:08:22.118 95.00000% : 10939.471us 00:08:22.118 98.00000% : 13308.849us 00:08:22.118 99.00000% 
: 13712.148us 00:08:22.118 99.50000% : 23492.135us 00:08:22.118 99.90000% : 29642.437us 00:08:22.118 99.99000% : 29844.086us 00:08:22.118 99.99900% : 29844.086us 00:08:22.118 99.99990% : 29844.086us 00:08:22.118 99.99999% : 29844.086us 00:08:22.118 00:08:22.118 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:22.118 ================================================================================= 00:08:22.118 1.00000% : 6049.477us 00:08:22.118 10.00000% : 6175.508us 00:08:22.118 25.00000% : 6402.363us 00:08:22.118 50.00000% : 6755.249us 00:08:22.118 75.00000% : 8267.618us 00:08:22.118 90.00000% : 9376.689us 00:08:22.118 95.00000% : 11393.182us 00:08:22.118 98.00000% : 13107.200us 00:08:22.118 99.00000% : 13510.498us 00:08:22.118 99.50000% : 22786.363us 00:08:22.118 99.90000% : 29239.138us 00:08:22.118 99.99000% : 29440.788us 00:08:22.118 99.99900% : 29642.437us 00:08:22.118 99.99990% : 29642.437us 00:08:22.118 99.99999% : 29642.437us 00:08:22.118 00:08:22.118 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:22.118 ================================================================================= 00:08:22.118 1.00000% : 6049.477us 00:08:22.118 10.00000% : 6175.508us 00:08:22.118 25.00000% : 6402.363us 00:08:22.118 50.00000% : 6755.249us 00:08:22.118 75.00000% : 8267.618us 00:08:22.118 90.00000% : 9376.689us 00:08:22.118 95.00000% : 11191.532us 00:08:22.118 98.00000% : 13107.200us 00:08:22.118 99.00000% : 13510.498us 00:08:22.118 99.50000% : 18047.606us 00:08:22.118 99.90000% : 22988.012us 00:08:22.118 99.99000% : 23290.486us 00:08:22.118 99.99900% : 23290.486us 00:08:22.118 99.99990% : 23290.486us 00:08:22.118 99.99999% : 23290.486us 00:08:22.118 00:08:22.118 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:22.118 ============================================================================== 00:08:22.118 Range in us Cumulative IO count 00:08:22.118 4688.345 - 4713.551: 0.0059% ( 1) 00:08:22.118 4713.551 - 4738.757: 0.0234% ( 3) 00:08:22.118 4738.757 - 4763.963: 0.0351% ( 2) 00:08:22.118 4763.963 - 4789.169: 0.0410% ( 1) 00:08:22.118 4789.169 - 4814.375: 0.0527% ( 2) 00:08:22.118 4814.375 - 4839.582: 0.0702% ( 3) 00:08:22.118 4839.582 - 4864.788: 0.0819% ( 2) 00:08:22.118 4864.788 - 4889.994: 0.0936% ( 2) 00:08:22.118 4889.994 - 4915.200: 0.1053% ( 2) 00:08:22.118 4915.200 - 4940.406: 0.1112% ( 1) 00:08:22.118 4940.406 - 4965.612: 0.1170% ( 1) 00:08:22.118 4965.612 - 4990.818: 0.1346% ( 3) 00:08:22.118 4990.818 - 5016.025: 0.1404% ( 1) 00:08:22.118 5016.025 - 5041.231: 0.1580% ( 3) 00:08:22.119 5041.231 - 5066.437: 0.1697% ( 2) 00:08:22.119 5066.437 - 5091.643: 0.1814% ( 2) 00:08:22.119 5091.643 - 5116.849: 0.1873% ( 1) 00:08:22.119 5116.849 - 5142.055: 0.1990% ( 2) 00:08:22.119 5142.055 - 5167.262: 0.2107% ( 2) 00:08:22.119 5167.262 - 5192.468: 0.2224% ( 2) 00:08:22.119 5192.468 - 5217.674: 0.2341% ( 2) 00:08:22.119 5217.674 - 5242.880: 0.2458% ( 2) 00:08:22.119 5242.880 - 5268.086: 0.2516% ( 1) 00:08:22.119 5268.086 - 5293.292: 0.2633% ( 2) 00:08:22.119 5293.292 - 5318.498: 0.2692% ( 1) 00:08:22.119 5318.498 - 5343.705: 0.2868% ( 3) 00:08:22.119 5343.705 - 5368.911: 0.2985% ( 2) 00:08:22.119 5368.911 - 5394.117: 0.3102% ( 2) 00:08:22.119 5394.117 - 5419.323: 0.3219% ( 2) 00:08:22.119 5419.323 - 5444.529: 0.3336% ( 2) 00:08:22.119 5444.529 - 5469.735: 0.3453% ( 2) 00:08:22.119 5469.735 - 5494.942: 0.3511% ( 1) 00:08:22.119 5494.942 - 5520.148: 0.3628% ( 2) 00:08:22.119 5520.148 - 5545.354: 0.3745% ( 2) 00:08:22.119 5873.034 - 
5898.240: 0.3804% ( 1) 00:08:22.119 5898.240 - 5923.446: 0.4155% ( 6) 00:08:22.119 5923.446 - 5948.652: 0.5911% ( 30) 00:08:22.119 5948.652 - 5973.858: 1.2699% ( 116) 00:08:22.119 5973.858 - 5999.065: 2.3876% ( 191) 00:08:22.119 5999.065 - 6024.271: 3.8038% ( 242) 00:08:22.119 6024.271 - 6049.477: 5.5419% ( 297) 00:08:22.119 6049.477 - 6074.683: 7.1863% ( 281) 00:08:22.119 6074.683 - 6099.889: 8.7722% ( 271) 00:08:22.119 6099.889 - 6125.095: 10.2821% ( 258) 00:08:22.119 6125.095 - 6150.302: 11.9324% ( 282) 00:08:22.119 6150.302 - 6175.508: 13.4246% ( 255) 00:08:22.119 6175.508 - 6200.714: 15.0105% ( 271) 00:08:22.119 6200.714 - 6225.920: 16.5555% ( 264) 00:08:22.119 6225.920 - 6251.126: 18.1238% ( 268) 00:08:22.119 6251.126 - 6276.332: 19.7097% ( 271) 00:08:22.119 6276.332 - 6301.538: 21.2839% ( 269) 00:08:22.119 6301.538 - 6326.745: 22.9401% ( 283) 00:08:22.119 6326.745 - 6351.951: 24.4792% ( 263) 00:08:22.119 6351.951 - 6377.157: 26.0943% ( 276) 00:08:22.119 6377.157 - 6402.363: 27.7271% ( 279) 00:08:22.119 6402.363 - 6427.569: 29.2544% ( 261) 00:08:22.119 6427.569 - 6452.775: 30.9515% ( 290) 00:08:22.119 6452.775 - 6503.188: 34.1760% ( 551) 00:08:22.119 6503.188 - 6553.600: 37.3771% ( 547) 00:08:22.119 6553.600 - 6604.012: 40.6543% ( 560) 00:08:22.119 6604.012 - 6654.425: 43.8261% ( 542) 00:08:22.119 6654.425 - 6704.837: 47.1091% ( 561) 00:08:22.119 6704.837 - 6755.249: 50.3979% ( 562) 00:08:22.119 6755.249 - 6805.662: 53.6868% ( 562) 00:08:22.119 6805.662 - 6856.074: 56.9991% ( 566) 00:08:22.119 6856.074 - 6906.486: 60.3289% ( 569) 00:08:22.119 6906.486 - 6956.898: 63.5592% ( 552) 00:08:22.119 6956.898 - 7007.311: 66.7135% ( 539) 00:08:22.119 7007.311 - 7057.723: 69.0016% ( 391) 00:08:22.119 7057.723 - 7108.135: 69.8268% ( 141) 00:08:22.119 7108.135 - 7158.548: 70.1955% ( 63) 00:08:22.119 7158.548 - 7208.960: 70.4588% ( 45) 00:08:22.119 7208.960 - 7259.372: 70.6285% ( 29) 00:08:22.119 7259.372 - 7309.785: 70.7982% ( 29) 00:08:22.119 7309.785 - 7360.197: 71.0440% ( 42) 00:08:22.119 7360.197 - 7410.609: 71.2547% ( 36) 00:08:22.119 7410.609 - 7461.022: 71.4244% ( 29) 00:08:22.119 7461.022 - 7511.434: 71.6292% ( 35) 00:08:22.119 7511.434 - 7561.846: 71.8399% ( 36) 00:08:22.119 7561.846 - 7612.258: 72.0272% ( 32) 00:08:22.119 7612.258 - 7662.671: 72.2320% ( 35) 00:08:22.119 7662.671 - 7713.083: 72.4778% ( 42) 00:08:22.119 7713.083 - 7763.495: 72.6416% ( 28) 00:08:22.119 7763.495 - 7813.908: 72.8640% ( 38) 00:08:22.119 7813.908 - 7864.320: 73.1156% ( 43) 00:08:22.119 7864.320 - 7914.732: 73.3731% ( 44) 00:08:22.119 7914.732 - 7965.145: 73.6189% ( 42) 00:08:22.119 7965.145 - 8015.557: 73.8530% ( 40) 00:08:22.119 8015.557 - 8065.969: 74.0988% ( 42) 00:08:22.119 8065.969 - 8116.382: 74.3738% ( 47) 00:08:22.119 8116.382 - 8166.794: 74.6489% ( 47) 00:08:22.119 8166.794 - 8217.206: 74.9239% ( 47) 00:08:22.119 8217.206 - 8267.618: 75.2399% ( 54) 00:08:22.119 8267.618 - 8318.031: 75.6496% ( 70) 00:08:22.119 8318.031 - 8368.443: 76.0768% ( 73) 00:08:22.119 8368.443 - 8418.855: 76.5566% ( 82) 00:08:22.119 8418.855 - 8469.268: 77.0482% ( 84) 00:08:22.119 8469.268 - 8519.680: 77.5925% ( 93) 00:08:22.119 8519.680 - 8570.092: 78.1835% ( 101) 00:08:22.119 8570.092 - 8620.505: 78.8799% ( 119) 00:08:22.119 8620.505 - 8670.917: 79.6114% ( 125) 00:08:22.119 8670.917 - 8721.329: 80.3839% ( 132) 00:08:22.119 8721.329 - 8771.742: 81.1505% ( 131) 00:08:22.119 8771.742 - 8822.154: 81.8762% ( 124) 00:08:22.119 8822.154 - 8872.566: 82.6369% ( 130) 00:08:22.119 8872.566 - 8922.978: 83.3919% ( 129) 00:08:22.119 
00:08:22.119 [buckets 8922.978us - 11342.769us: cumulative 84.1526% -> 95.1252%]
00:08:22.397 [buckets 11342.769us - 14115.446us: cumulative 95.1252% -> 99.2509%]
00:08:22.398 [tail buckets 23996.258us - 30449.034us: cumulative reaches 100.0000%]
00:08:22.398 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:22.398 ==============================================================================
00:08:22.398        Range in us     Cumulative    IO count
00:08:22.398 [buckets 4511.902us - 14014.622us: cumulative 0.0059% -> 99.2509%; tail buckets 23391.311us - 29642.437us reach 100.0000%]
00:08:22.398 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:22.398 ==============================================================================
00:08:22.398        Range in us     Cumulative    IO count
00:08:22.399 [buckets 4083.397us - 14115.446us: cumulative 0.0527% -> 99.2509%; tail buckets 23290.486us - 30247.385us reach 100.0000%]
00:08:22.399 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:22.399 ==============================================================================
00:08:22.399        Range in us     Cumulative    IO count
00:08:22.400 [buckets 3831.335us - 13812.972us: cumulative 0.0117% -> 99.2509%; tail buckets 22685.538us - 29844.086us reach 100.0000%]
00:08:22.400 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:22.400 ==============================================================================
00:08:22.400        Range in us     Cumulative    IO count
00:08:22.401 [buckets 3654.892us - 13812.972us: cumulative 0.0234% -> 99.2509%; tail buckets 21979.766us - 29642.437us reach 100.0000%]
00:08:22.402 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:22.402 ==============================================================================
00:08:22.402        Range in us     Cumulative    IO count
00:08:22.403 [buckets 3528.862us - 13812.972us: cumulative 0.0292% -> 99.2537%; tail buckets 17140.185us - 23290.486us reach 100.0000%]
00:08:22.403 00:40:14 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:23.778 Initializing NVMe Controllers
00:08:23.778 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:23.778 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:23.778 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:23.778 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:23.779 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:23.779 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:23.779 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:23.779 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:23.779 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:23.779 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:23.779 Initialization complete. Launching workers.
00:08:23.779 ========================================================
00:08:23.779                                                                    Latency(us)
00:08:23.779 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:23.779 PCIE (0000:00:10.0) NSID 1 from core 0:   17145.11     200.92    7468.27    4925.67   26215.21
00:08:23.779 PCIE (0000:00:11.0) NSID 1 from core 0:   17145.11     200.92    7461.60    4817.33   25704.91
00:08:23.779 PCIE (0000:00:13.0) NSID 1 from core 0:   17145.11     200.92    7454.76    4362.07   24873.29
00:08:23.779 PCIE (0000:00:12.0) NSID 1 from core 0:   17145.11     200.92    7447.68    4354.80   24495.58
00:08:23.779 PCIE (0000:00:12.0) NSID 2 from core 0:   17145.11     200.92    7440.41    4082.38   24339.52
00:08:23.779 PCIE (0000:00:12.0) NSID 3 from core 0:   17145.11     200.92    7433.27    3997.82   24025.95
00:08:23.779 ========================================================
00:08:23.779 Total                                  :  102870.65    1205.52    7451.00    3997.82   26215.21
00:08:23.779
00:08:23.779 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:23.779 =================================================================================
00:08:23.779     1.00000% :  5948.652us
00:08:23.779    10.00000% :  6553.600us
00:08:23.779    25.00000% :  6755.249us
00:08:23.779    50.00000% :  7057.723us
00:08:23.779    75.00000% :  7410.609us
00:08:23.779    90.00000% :  8519.680us
00:08:23.779    95.00000% : 10889.058us
00:08:23.779    98.00000% : 13611.323us
00:08:23.779    99.00000% : 14518.745us
00:08:23.779    99.50000% : 17140.185us
00:08:23.779    99.90000% : 25710.277us
00:08:23.779    99.99000% : 26214.400us
00:08:23.779    99.99900% : 26416.049us
00:08:23.779    99.99990% : 26416.049us
00:08:23.779    99.99999% : 26416.049us
00:08:23.779
00:08:23.779 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:23.779 =================================================================================
00:08:23.779     1.00000% :  6099.889us
00:08:23.779    10.00000% :  6654.425us
00:08:23.779    25.00000% :  6856.074us
00:08:23.779    50.00000% :  7057.723us
00:08:23.779    75.00000% :  7309.785us
00:08:23.779    90.00000% :  8418.855us
00:08:23.779    95.00000% : 10536.172us
00:08:23.779    98.00000% : 13409.674us
00:08:23.779    99.00000% : 15224.517us
00:08:23.779    99.50000% : 17543.483us
00:08:23.779    99.90000% : 25306.978us
00:08:23.779    99.99000% : 25710.277us
00:08:23.779    99.99900% : 25710.277us
00:08:23.779    99.99990% : 25710.277us
00:08:23.779    99.99999% : 25710.277us
00:08:23.779
00:08:23.779 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:23.779 =================================================================================
00:08:23.779     1.00000% :  6049.477us
00:08:23.779    10.00000% :  6654.425us
00:08:23.779    25.00000% :  6856.074us
00:08:23.779    50.00000% :  7057.723us
00:08:23.779    75.00000% :  7309.785us
00:08:23.779    90.00000% :  8368.443us
00:08:23.779    95.00000% : 10536.172us
00:08:23.779    98.00000% : 13611.323us
00:08:23.779    99.00000% : 15224.517us
00:08:23.779    99.50000% : 18249.255us
00:08:23.779    99.90000% : 24500.382us
00:08:23.779    99.99000% : 24903.680us
00:08:23.779    99.99900% : 24903.680us
00:08:23.779    99.99990% : 24903.680us
00:08:23.779    99.99999% : 24903.680us
00:08:23.779
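Each per-device summary block in this region is a set of percentile readouts, and the cumulative latency histograms printed further down are the raw material for them: every bucket line carries a running cumulative percentage, so the N-th percentile is read off at the bucket boundary where the cumulative column first reaches N. A minimal Python sketch of that lookup (an illustrative helper, not SPDK code; it assumes the bucket format "low - high: cum% ( count )" seen in this log, and that the reported value is the crossing bucket's upper edge, which may differ from perf's exact edge convention):

    import re

    # One histogram bucket per line, e.g. "4915.200 - 4940.406: 0.0175% ( 3)"
    BUCKET = re.compile(r"([\d.]+) - ([\d.]+):\s+([\d.]+)%\s+\(\s*(\d+)\)")

    def percentile_us(bucket_lines, target_pct):
        # Return the upper edge (us) of the first bucket whose cumulative
        # percentage reaches target_pct; None if the target is never reached.
        for line in bucket_lines:
            m = BUCKET.search(line)
            if m and float(m.group(3)) >= target_pct:
                return float(m.group(2))
        return None

    sample = [
        "4915.200 - 4940.406: 0.0175% ( 3)",
        "4940.406 - 4965.612: 0.0350% ( 3)",
        "4965.612 - 4990.818: 0.0408% ( 1)",
    ]
    print(percentile_us(sample, 0.03))  # -> 4965.612

The wide gap between the 99.9th percentiles (roughly 24-26ms) and the medians (roughly 7ms) in these blocks is why the histograms show a sparse run of tail buckets far beyond the bulk of the distribution.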
: 15224.517us 00:08:23.779 99.50000% : 18148.431us 00:08:23.779 99.90000% : 24097.083us 00:08:23.779 99.99000% : 24500.382us 00:08:23.779 99.99900% : 24500.382us 00:08:23.779 99.99990% : 24500.382us 00:08:23.779 99.99999% : 24500.382us 00:08:23.779 00:08:23.779 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:23.779 ================================================================================= 00:08:23.779 1.00000% : 5999.065us 00:08:23.779 10.00000% : 6654.425us 00:08:23.779 25.00000% : 6856.074us 00:08:23.779 50.00000% : 7007.311us 00:08:23.779 75.00000% : 7259.372us 00:08:23.779 90.00000% : 8418.855us 00:08:23.779 95.00000% : 11342.769us 00:08:23.779 98.00000% : 12855.138us 00:08:23.779 99.00000% : 14720.394us 00:08:23.779 99.50000% : 17845.957us 00:08:23.779 99.90000% : 23996.258us 00:08:23.779 99.99000% : 24298.732us 00:08:23.779 99.99900% : 24399.557us 00:08:23.779 99.99990% : 24399.557us 00:08:23.779 99.99999% : 24399.557us 00:08:23.779 00:08:23.779 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:23.779 ================================================================================= 00:08:23.779 1.00000% : 5999.065us 00:08:23.779 10.00000% : 6654.425us 00:08:23.779 25.00000% : 6856.074us 00:08:23.779 50.00000% : 7057.723us 00:08:23.779 75.00000% : 7309.785us 00:08:23.779 90.00000% : 8368.443us 00:08:23.779 95.00000% : 11191.532us 00:08:23.779 98.00000% : 13308.849us 00:08:23.779 99.00000% : 14014.622us 00:08:23.779 99.50000% : 17644.308us 00:08:23.779 99.90000% : 23592.960us 00:08:23.779 99.99000% : 24097.083us 00:08:23.779 99.99900% : 24097.083us 00:08:23.779 99.99990% : 24097.083us 00:08:23.779 99.99999% : 24097.083us 00:08:23.779 00:08:23.779 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:23.779 ============================================================================== 00:08:23.779 Range in us Cumulative IO count 00:08:23.779 4915.200 - 4940.406: 0.0175% ( 3) 00:08:23.779 4940.406 - 4965.612: 0.0350% ( 3) 00:08:23.779 4965.612 - 4990.818: 0.0408% ( 1) 00:08:23.779 4990.818 - 5016.025: 0.0525% ( 2) 00:08:23.779 5016.025 - 5041.231: 0.0583% ( 1) 00:08:23.779 5041.231 - 5066.437: 0.0700% ( 2) 00:08:23.779 5066.437 - 5091.643: 0.0875% ( 3) 00:08:23.779 5091.643 - 5116.849: 0.0991% ( 2) 00:08:23.779 5116.849 - 5142.055: 0.1224% ( 4) 00:08:23.779 5142.055 - 5167.262: 0.1458% ( 4) 00:08:23.779 5167.262 - 5192.468: 0.1632% ( 3) 00:08:23.779 5192.468 - 5217.674: 0.1749% ( 2) 00:08:23.779 5217.674 - 5242.880: 0.1807% ( 1) 00:08:23.779 5242.880 - 5268.086: 0.1924% ( 2) 00:08:23.779 5268.086 - 5293.292: 0.2041% ( 2) 00:08:23.779 5293.292 - 5318.498: 0.2099% ( 1) 00:08:23.779 5318.498 - 5343.705: 0.2157% ( 1) 00:08:23.779 5343.705 - 5368.911: 0.2274% ( 2) 00:08:23.779 5368.911 - 5394.117: 0.3673% ( 24) 00:08:23.779 5394.117 - 5419.323: 0.3731% ( 1) 00:08:23.779 5545.354 - 5570.560: 0.3790% ( 1) 00:08:23.779 5570.560 - 5595.766: 0.3848% ( 1) 00:08:23.779 5595.766 - 5620.972: 0.3906% ( 1) 00:08:23.779 5620.972 - 5646.178: 0.4198% ( 5) 00:08:23.779 5646.178 - 5671.385: 0.4373% ( 3) 00:08:23.779 5671.385 - 5696.591: 0.4839% ( 8) 00:08:23.779 5696.591 - 5721.797: 0.5306% ( 8) 00:08:23.779 5721.797 - 5747.003: 0.5539% ( 4) 00:08:23.779 5747.003 - 5772.209: 0.6005% ( 8) 00:08:23.779 5772.209 - 5797.415: 0.6413% ( 7) 00:08:23.779 5797.415 - 5822.622: 0.6938% ( 9) 00:08:23.779 5822.622 - 5847.828: 0.7521% ( 10) 00:08:23.779 5847.828 - 5873.034: 0.8046% ( 9) 00:08:23.779 5873.034 - 5898.240: 0.8862% ( 14) 00:08:23.779 5898.240 - 
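Two quick consistency checks on the table above (the workload parameters are inferred from the printed figures; they are not stated at this point in the log): 200.92 MiB/s at 17145.11 IOPS implies an I/O size of about 12 KiB per request, and Little's law (in-flight I/Os = IOPS x mean latency) implies roughly 128 outstanding I/Os per namespace, about 768 across the six namespaces. A minimal sketch in Python:

    # Consistency checks on the per-namespace figures in the table above.
    # All inputs are transcribed from the log; the 12 KiB I/O size and the
    # queue depth of ~128 are inferences, not values printed by the tool.
    iops = 17145.11          # per-namespace I/Os per second
    mib_s = 200.92           # per-namespace throughput, MiB/s
    avg_us = 7468.27         # per-namespace mean latency, microseconds

    io_size = mib_s * 1024**2 / iops
    print(f"implied I/O size: {io_size:.0f} bytes")       # ~12288 (12 KiB)

    # Little's law: mean outstanding requests = arrival rate * mean latency
    in_flight = iops * avg_us / 1e6
    print(f"implied in-flight I/Os: {in_flight:.1f}")     # ~128.0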
00:08:23.779 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:23.779 ==============================================================================
00:08:23.779            Range in us     Cumulative   IO count
00:08:23.779     4915.200 -  4940.406:    0.0175%  (    3)
00:08:23.779     [ ... intermediate buckets condensed ... ]
00:08:23.780     6956.898 -  7007.311:   49.2421%  (  666)
00:08:23.780     7007.311 -  7057.723:   53.2591%  (  689)
00:08:23.780     [ ... intermediate buckets condensed ... ]
00:08:23.780    26214.400 - 26416.049:  100.0000%  (    1)
00:08:23.780
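The per-device percentiles in the summary table above line up with bucket upper bounds in these cumulative histograms: for 0000:00:10.0 the 50.00000% entry is 7057.723us, and the dump above first crosses 50% in the 7007.311 - 7057.723 bucket (49.2421% to 53.2591%). A minimal lookup sketch illustrating that relationship (not the perf tool's actual code):

    # Read a percentile off a cumulative latency histogram like the one above.
    # Buckets are (upper_bound_us, cumulative_pct) pairs; the two values below
    # are the 50%-crossing region of the 0000:00:10.0 histogram.
    def percentile(buckets, pct):
        # Upper bound of the first bucket whose cumulative share reaches pct.
        for upper_us, cum_pct in buckets:
            if cum_pct >= pct:
                return upper_us
        raise ValueError("percentile beyond recorded range")

    buckets = [(7007.311, 49.2421), (7057.723, 53.2591)]
    print(percentile(buckets, 50.0))   # 7057.723, matching the summary table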
00:08:23.780 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:23.780 ==============================================================================
00:08:23.780            Range in us     Cumulative   IO count
00:08:23.780     4814.375 -  4839.582:    0.0058%  (    1)
00:08:23.781     [ ... intermediate buckets condensed ... ]
00:08:23.782    25609.452 - 25710.277:  100.0000%  (    4)
00:08:23.782
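Each bucket line carries both a running percentage and a per-bucket I/O count, so the total I/O count behind one of these histograms can be recovered from any adjacent pair of buckets. Taking the 50%-crossing bucket of the 0000:00:10.0 dump quoted above (689 I/Os moving the cumulative share from 49.2421% to 53.2591%), a sketch:

    # Recover the total I/O count behind one of the histograms above.
    # 689 I/Os advanced the cumulative percentage by 4.0170 points, so the
    # run recorded roughly 689 / 0.040170 I/Os for this namespace.
    bucket_ios = 689
    delta = (53.2591 - 49.2421) / 100.0
    print(round(bucket_ios / delta))   # ~17152 I/Os per namespace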
00:08:23.782 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:23.782 ==============================================================================
00:08:23.782            Range in us     Cumulative   IO count
00:08:23.782     4360.665 -  4385.871:    0.0058%  (    1)
00:08:23.782     [ ... intermediate buckets condensed ... ]
00:08:23.783    24802.855 - 24903.680:  100.0000%  (    3)
00:08:23.783
00:08:23.783 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:23.783 ==============================================================================
00:08:23.783            Range in us     Cumulative   IO count
00:08:23.783     4335.458 -  4360.665:    0.0058%  (    1)
00:08:23.783     [ ... intermediate buckets condensed ... ]
00:08:23.784    24399.557 - 24500.382:  100.0000%  (    3)
00:08:23.784
00:08:23.784 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:23.784 ==============================================================================
00:08:23.784            Range in us     Cumulative   IO count
00:08:23.784     4058.191 -  4083.397:    0.0058%  (    1)
00:08:23.785     [ ... intermediate buckets condensed ... ]
00:08:23.785    24298.732 - 24399.557:  100.0000%  (    1)
00:08:23.785
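The bucket edges look irregular in microseconds but are evenly spaced in CPU timestamp-counter ticks: adjacent edges around 5 ms sit 25.206 us apart, which is exactly 2**16 ticks at 2.6 GHz, and in the full dumps the spacing doubles with each power-of-two range of ticks (50.412 us around 8 ms, 100.824 us around 17 ms). The 2.6 GHz tick rate is an inference from that spacing, not a value printed in the log; the doubling is the usual shape of a log2-bucketed histogram. A sketch reproducing two edges quoted above:

    # Bucket edges are uniform in timestamp-counter ticks, not in microseconds.
    # The 2.6 GHz rate is inferred from the observed 25.206 us edge spacing.
    tsc_hz = 2.6e9
    tick_us = 2**16 / tsc_hz * 1e6     # one 65536-tick step = ~25.206 us
    print(round(195 * tick_us, 3))     # 4915.2   (first 0000:00:10.0 edge)
    print(round(196 * tick_us, 3))     # 4940.406 (next edge in that dump)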
5394.117: 0.4606% ( 3) 00:08:23.786 5394.117 - 5419.323: 0.5306% ( 12) 00:08:23.786 5419.323 - 5444.529: 0.5947% ( 11) 00:08:23.786 5444.529 - 5469.735: 0.6122% ( 3) 00:08:23.786 5469.735 - 5494.942: 0.6238% ( 2) 00:08:23.786 5494.942 - 5520.148: 0.6355% ( 2) 00:08:23.786 5520.148 - 5545.354: 0.6413% ( 1) 00:08:23.786 5545.354 - 5570.560: 0.6588% ( 3) 00:08:23.786 5570.560 - 5595.766: 0.6646% ( 1) 00:08:23.786 5595.766 - 5620.972: 0.6763% ( 2) 00:08:23.786 5620.972 - 5646.178: 0.6880% ( 2) 00:08:23.786 5646.178 - 5671.385: 0.6996% ( 2) 00:08:23.786 5671.385 - 5696.591: 0.7113% ( 2) 00:08:23.786 5696.591 - 5721.797: 0.7229% ( 2) 00:08:23.786 5721.797 - 5747.003: 0.7288% ( 1) 00:08:23.786 5747.003 - 5772.209: 0.7404% ( 2) 00:08:23.786 5772.209 - 5797.415: 0.7463% ( 1) 00:08:23.786 5797.415 - 5822.622: 0.7521% ( 1) 00:08:23.786 5822.622 - 5847.828: 0.7579% ( 1) 00:08:23.786 5873.034 - 5898.240: 0.7754% ( 3) 00:08:23.786 5898.240 - 5923.446: 0.8162% ( 7) 00:08:23.786 5923.446 - 5948.652: 0.8687% ( 9) 00:08:23.786 5948.652 - 5973.858: 0.9153% ( 8) 00:08:23.786 5973.858 - 5999.065: 1.0086% ( 16) 00:08:23.786 5999.065 - 6024.271: 1.0553% ( 8) 00:08:23.786 6024.271 - 6049.477: 1.1486% ( 16) 00:08:23.786 6049.477 - 6074.683: 1.6033% ( 78) 00:08:23.786 6074.683 - 6099.889: 1.9881% ( 66) 00:08:23.786 6099.889 - 6125.095: 2.2854% ( 51) 00:08:23.786 6125.095 - 6150.302: 2.5944% ( 53) 00:08:23.786 6150.302 - 6175.508: 2.7285% ( 23) 00:08:23.786 6175.508 - 6200.714: 2.8568% ( 22) 00:08:23.786 6200.714 - 6225.920: 3.3990% ( 93) 00:08:23.786 6225.920 - 6251.126: 3.6964% ( 51) 00:08:23.786 6251.126 - 6276.332: 4.0345% ( 58) 00:08:23.786 6276.332 - 6301.538: 4.5359% ( 86) 00:08:23.786 6301.538 - 6326.745: 4.6875% ( 26) 00:08:23.786 6326.745 - 6351.951: 4.7924% ( 18) 00:08:23.786 6351.951 - 6377.157: 4.9499% ( 27) 00:08:23.786 6377.157 - 6402.363: 5.1131% ( 28) 00:08:23.786 6402.363 - 6427.569: 5.5737% ( 79) 00:08:23.786 6427.569 - 6452.775: 5.8477% ( 47) 00:08:23.786 6452.775 - 6503.188: 6.9263% ( 185) 00:08:23.786 6503.188 - 6553.600: 8.1215% ( 205) 00:08:23.786 6553.600 - 6604.012: 9.5382% ( 243) 00:08:23.786 6604.012 - 6654.425: 12.0103% ( 424) 00:08:23.786 6654.425 - 6704.837: 14.8146% ( 481) 00:08:23.786 6704.837 - 6755.249: 18.2078% ( 582) 00:08:23.786 6755.249 - 6805.662: 22.8603% ( 798) 00:08:23.786 6805.662 - 6856.074: 27.4429% ( 786) 00:08:23.786 6856.074 - 6906.486: 33.9844% ( 1122) 00:08:23.786 6906.486 - 6956.898: 41.6919% ( 1322) 00:08:23.786 6956.898 - 7007.311: 49.9242% ( 1412) 00:08:23.786 7007.311 - 7057.723: 56.7106% ( 1164) 00:08:23.786 7057.723 - 7108.135: 62.2318% ( 947) 00:08:23.786 7108.135 - 7158.548: 67.5840% ( 918) 00:08:23.786 7158.548 - 7208.960: 71.4028% ( 655) 00:08:23.786 7208.960 - 7259.372: 74.6793% ( 562) 00:08:23.786 7259.372 - 7309.785: 77.0756% ( 411) 00:08:23.786 7309.785 - 7360.197: 79.1511% ( 356) 00:08:23.786 7360.197 - 7410.609: 80.5620% ( 242) 00:08:23.786 7410.609 - 7461.022: 81.8389% ( 219) 00:08:23.786 7461.022 - 7511.434: 82.4802% ( 110) 00:08:23.786 7511.434 - 7561.846: 82.9874% ( 87) 00:08:23.786 7561.846 - 7612.258: 83.7045% ( 123) 00:08:23.786 7612.258 - 7662.671: 84.2001% ( 85) 00:08:23.786 7662.671 - 7713.083: 84.7773% ( 99) 00:08:23.786 7713.083 - 7763.495: 85.2437% ( 80) 00:08:23.786 7763.495 - 7813.908: 85.8384% ( 102) 00:08:23.786 7813.908 - 7864.320: 86.3573% ( 89) 00:08:23.786 7864.320 - 7914.732: 86.8412% ( 83) 00:08:23.786 7914.732 - 7965.145: 87.2376% ( 68) 00:08:23.786 7965.145 - 8015.557: 87.5233% ( 49) 00:08:23.786 8015.557 - 8065.969: 
87.8148% ( 50) 00:08:23.786 8065.969 - 8116.382: 88.0539% ( 41) 00:08:23.786 8116.382 - 8166.794: 88.2987% ( 42) 00:08:23.786 8166.794 - 8217.206: 88.6077% ( 53) 00:08:23.786 8217.206 - 8267.618: 89.0042% ( 68) 00:08:23.786 8267.618 - 8318.031: 89.7505% ( 128) 00:08:23.786 8318.031 - 8368.443: 90.1003% ( 60) 00:08:23.786 8368.443 - 8418.855: 90.6950% ( 102) 00:08:23.786 8418.855 - 8469.268: 91.0098% ( 54) 00:08:23.786 8469.268 - 8519.680: 91.2372% ( 39) 00:08:23.786 8519.680 - 8570.092: 91.4937% ( 44) 00:08:23.786 8570.092 - 8620.505: 91.7619% ( 46) 00:08:23.786 8620.505 - 8670.917: 91.9368% ( 30) 00:08:23.786 8670.917 - 8721.329: 92.0651% ( 22) 00:08:23.786 8721.329 - 8771.742: 92.1758% ( 19) 00:08:23.786 8771.742 - 8822.154: 92.3682% ( 33) 00:08:23.786 8822.154 - 8872.566: 92.5606% ( 33) 00:08:23.786 8872.566 - 8922.978: 92.7414% ( 31) 00:08:23.786 8922.978 - 8973.391: 92.9629% ( 38) 00:08:23.786 8973.391 - 9023.803: 93.0912% ( 22) 00:08:23.786 9023.803 - 9074.215: 93.2136% ( 21) 00:08:23.786 9074.215 - 9124.628: 93.3535% ( 24) 00:08:23.786 9124.628 - 9175.040: 93.5401% ( 32) 00:08:23.786 9175.040 - 9225.452: 93.6567% ( 20) 00:08:23.786 9225.452 - 9275.865: 93.7208% ( 11) 00:08:23.786 9275.865 - 9326.277: 93.7792% ( 10) 00:08:23.786 9326.277 - 9376.689: 93.8258% ( 8) 00:08:23.786 9376.689 - 9427.102: 93.8724% ( 8) 00:08:23.786 9427.102 - 9477.514: 93.9132% ( 7) 00:08:23.786 9477.514 - 9527.926: 93.9482% ( 6) 00:08:23.786 9527.926 - 9578.338: 93.9832% ( 6) 00:08:23.786 9578.338 - 9628.751: 94.0590% ( 13) 00:08:23.786 9628.751 - 9679.163: 94.1639% ( 18) 00:08:23.786 9679.163 - 9729.575: 94.2456% ( 14) 00:08:23.786 9729.575 - 9779.988: 94.2864% ( 7) 00:08:23.786 9779.988 - 9830.400: 94.2980% ( 2) 00:08:23.786 9830.400 - 9880.812: 94.3097% ( 2) 00:08:23.786 9880.812 - 9931.225: 94.3447% ( 6) 00:08:23.786 9931.225 - 9981.637: 94.3738% ( 5) 00:08:23.786 9981.637 - 10032.049: 94.4030% ( 5) 00:08:23.786 10032.049 - 10082.462: 94.4263% ( 4) 00:08:23.786 10082.462 - 10132.874: 94.4555% ( 5) 00:08:23.786 10132.874 - 10183.286: 94.4963% ( 7) 00:08:23.786 10183.286 - 10233.698: 94.5196% ( 4) 00:08:23.786 10233.698 - 10284.111: 94.5487% ( 5) 00:08:23.786 10284.111 - 10334.523: 94.6129% ( 11) 00:08:23.786 10334.523 - 10384.935: 94.6479% ( 6) 00:08:23.786 10384.935 - 10435.348: 94.7178% ( 12) 00:08:23.786 10435.348 - 10485.760: 94.7645% ( 8) 00:08:23.786 10485.760 - 10536.172: 94.7761% ( 2) 00:08:23.786 10989.883 - 11040.295: 94.7819% ( 1) 00:08:23.786 11040.295 - 11090.708: 94.8286% ( 8) 00:08:23.786 11090.708 - 11141.120: 94.9510% ( 21) 00:08:23.786 11141.120 - 11191.532: 95.0851% ( 23) 00:08:23.786 11191.532 - 11241.945: 95.2892% ( 35) 00:08:23.786 11241.945 - 11292.357: 95.4233% ( 23) 00:08:23.786 11292.357 - 11342.769: 95.5924% ( 29) 00:08:23.786 11342.769 - 11393.182: 95.6915% ( 17) 00:08:23.786 11393.182 - 11443.594: 95.7498% ( 10) 00:08:23.786 11443.594 - 11494.006: 95.8197% ( 12) 00:08:23.786 11494.006 - 11544.418: 95.8839% ( 11) 00:08:23.786 11544.418 - 11594.831: 95.9713% ( 15) 00:08:23.786 11594.831 - 11645.243: 96.0704% ( 17) 00:08:23.786 11645.243 - 11695.655: 96.1695% ( 17) 00:08:23.786 11695.655 - 11746.068: 96.2628% ( 16) 00:08:23.786 11746.068 - 11796.480: 96.3386% ( 13) 00:08:23.786 11796.480 - 11846.892: 96.3911% ( 9) 00:08:23.786 11846.892 - 11897.305: 96.4494% ( 10) 00:08:23.786 11897.305 - 11947.717: 96.4902% ( 7) 00:08:23.786 11947.717 - 11998.129: 96.5194% ( 5) 00:08:23.786 11998.129 - 12048.542: 96.5660% ( 8) 00:08:23.786 12048.542 - 12098.954: 96.5951% ( 5) 00:08:23.786 
12098.954 - 12149.366: 96.6360% ( 7) 00:08:23.786 12149.366 - 12199.778: 96.6593% ( 4) 00:08:23.786 12199.778 - 12250.191: 96.6826% ( 4) 00:08:23.786 12250.191 - 12300.603: 96.7001% ( 3) 00:08:23.786 12300.603 - 12351.015: 96.7292% ( 5) 00:08:23.786 12351.015 - 12401.428: 96.7584% ( 5) 00:08:23.786 12401.428 - 12451.840: 96.7875% ( 5) 00:08:23.786 12451.840 - 12502.252: 96.8167% ( 5) 00:08:23.786 12502.252 - 12552.665: 96.8633% ( 8) 00:08:23.786 12552.665 - 12603.077: 96.9100% ( 8) 00:08:23.786 12603.077 - 12653.489: 96.9566% ( 8) 00:08:23.786 12653.489 - 12703.902: 97.0499% ( 16) 00:08:23.786 12703.902 - 12754.314: 97.1199% ( 12) 00:08:23.786 12754.314 - 12804.726: 97.1723% ( 9) 00:08:23.786 12804.726 - 12855.138: 97.2190% ( 8) 00:08:23.787 12855.138 - 12905.551: 97.2889% ( 12) 00:08:23.787 12905.551 - 13006.375: 97.4464% ( 27) 00:08:23.787 13006.375 - 13107.200: 97.7612% ( 54) 00:08:23.787 13107.200 - 13208.025: 97.9361% ( 30) 00:08:23.787 13208.025 - 13308.849: 98.0935% ( 27) 00:08:23.787 13308.849 - 13409.674: 98.2276% ( 23) 00:08:23.787 13409.674 - 13510.498: 98.3442% ( 20) 00:08:23.787 13510.498 - 13611.323: 98.4841% ( 24) 00:08:23.787 13611.323 - 13712.148: 98.6474% ( 28) 00:08:23.787 13712.148 - 13812.972: 98.8281% ( 31) 00:08:23.787 13812.972 - 13913.797: 98.8981% ( 12) 00:08:23.787 13913.797 - 14014.622: 99.0089% ( 19) 00:08:23.787 14014.622 - 14115.446: 99.0438% ( 6) 00:08:23.787 14115.446 - 14216.271: 99.0788% ( 6) 00:08:23.787 14216.271 - 14317.095: 99.1196% ( 7) 00:08:23.787 14317.095 - 14417.920: 99.1604% ( 7) 00:08:23.787 14417.920 - 14518.745: 99.1954% ( 6) 00:08:23.787 14518.745 - 14619.569: 99.2304% ( 6) 00:08:23.787 14619.569 - 14720.394: 99.2479% ( 3) 00:08:23.787 14720.394 - 14821.218: 99.2537% ( 1) 00:08:23.787 16636.062 - 16736.886: 99.2596% ( 1) 00:08:23.787 16736.886 - 16837.711: 99.2829% ( 4) 00:08:23.787 16837.711 - 16938.535: 99.3062% ( 4) 00:08:23.787 16938.535 - 17039.360: 99.3354% ( 5) 00:08:23.787 17039.360 - 17140.185: 99.3587% ( 4) 00:08:23.787 17140.185 - 17241.009: 99.3878% ( 5) 00:08:23.787 17241.009 - 17341.834: 99.4170% ( 5) 00:08:23.787 17341.834 - 17442.658: 99.4520% ( 6) 00:08:23.787 17442.658 - 17543.483: 99.4811% ( 5) 00:08:23.787 17543.483 - 17644.308: 99.5044% ( 4) 00:08:23.787 17644.308 - 17745.132: 99.5278% ( 4) 00:08:23.787 17745.132 - 17845.957: 99.5452% ( 3) 00:08:23.787 17845.957 - 17946.782: 99.5627% ( 3) 00:08:23.787 17946.782 - 18047.606: 99.5861% ( 4) 00:08:23.787 18047.606 - 18148.431: 99.6035% ( 3) 00:08:23.787 18148.431 - 18249.255: 99.6269% ( 4) 00:08:23.787 23088.837 - 23189.662: 99.6327% ( 1) 00:08:23.787 23189.662 - 23290.486: 99.6618% ( 5) 00:08:23.787 23290.486 - 23391.311: 99.7318% ( 12) 00:08:23.787 23391.311 - 23492.135: 99.8368% ( 18) 00:08:23.787 23492.135 - 23592.960: 99.9475% ( 19) 00:08:23.787 23693.785 - 23794.609: 99.9534% ( 1) 00:08:23.787 23794.609 - 23895.434: 99.9708% ( 3) 00:08:23.787 23895.434 - 23996.258: 99.9883% ( 3) 00:08:23.787 23996.258 - 24097.083: 100.0000% ( 2) 00:08:23.787 00:08:23.787 00:40:15 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:23.787 00:08:23.787 real 0m2.462s 00:08:23.787 user 0m2.163s 00:08:23.787 sys 0m0.183s 00:08:23.787 00:40:15 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.787 00:40:15 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:23.787 ************************************ 00:08:23.787 END TEST nvme_perf 00:08:23.787 ************************************ 00:08:23.787 00:40:15 nvme -- nvme/nvme.sh@87 -- # 
run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:23.787 00:40:15 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:23.787 00:40:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.787 00:40:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.787 ************************************ 00:08:23.787 START TEST nvme_hello_world 00:08:23.787 ************************************ 00:08:23.787 00:40:15 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:23.787 Initializing NVMe Controllers 00:08:23.787 Attached to 0000:00:10.0 00:08:23.787 Namespace ID: 1 size: 6GB 00:08:23.787 Attached to 0000:00:11.0 00:08:23.787 Namespace ID: 1 size: 5GB 00:08:23.787 Attached to 0000:00:13.0 00:08:23.787 Namespace ID: 1 size: 1GB 00:08:23.787 Attached to 0000:00:12.0 00:08:23.787 Namespace ID: 1 size: 4GB 00:08:23.787 Namespace ID: 2 size: 4GB 00:08:23.787 Namespace ID: 3 size: 4GB 00:08:23.787 Initialization complete. 00:08:23.787 INFO: using host memory buffer for IO 00:08:23.787 Hello world! 00:08:23.787 INFO: using host memory buffer for IO 00:08:23.787 Hello world! 00:08:23.787 INFO: using host memory buffer for IO 00:08:23.787 Hello world! 00:08:23.787 INFO: using host memory buffer for IO 00:08:23.787 Hello world! 00:08:23.787 INFO: using host memory buffer for IO 00:08:23.787 Hello world! 00:08:23.787 INFO: using host memory buffer for IO 00:08:23.787 Hello world! 00:08:23.787 00:08:23.787 real 0m0.184s 00:08:23.787 user 0m0.063s 00:08:23.787 sys 0m0.084s 00:08:23.787 00:40:15 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.787 00:40:15 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:23.787 ************************************ 00:08:23.787 END TEST nvme_hello_world 00:08:23.787 ************************************ 00:08:23.787 00:40:15 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:23.787 00:40:15 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.787 00:40:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.787 00:40:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.787 ************************************ 00:08:23.787 START TEST nvme_sgl 00:08:23.787 ************************************ 00:08:23.787 00:40:15 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:24.045 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:24.045 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:24.045 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:24.045 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:24.045 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:24.045 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:24.045 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:24.045 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:24.045 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:24.045 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:24.045 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:24.045 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:24.045 0000:00:13.0: 
build_io_request_1 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:24.045 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:24.045 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:24.045 NVMe Readv/Writev Request test 00:08:24.045 Attached to 0000:00:10.0 00:08:24.045 Attached to 0000:00:11.0 00:08:24.045 Attached to 0000:00:13.0 00:08:24.045 Attached to 0000:00:12.0 00:08:24.045 0000:00:10.0: build_io_request_2 test passed 00:08:24.045 0000:00:10.0: build_io_request_4 test passed 00:08:24.045 0000:00:10.0: build_io_request_5 test passed 00:08:24.045 0000:00:10.0: build_io_request_6 test passed 00:08:24.045 0000:00:10.0: build_io_request_7 test passed 00:08:24.045 0000:00:10.0: build_io_request_10 test passed 00:08:24.046 0000:00:11.0: build_io_request_2 test passed 00:08:24.046 0000:00:11.0: build_io_request_4 test passed 00:08:24.046 0000:00:11.0: build_io_request_5 test passed 00:08:24.046 0000:00:11.0: build_io_request_6 test passed 00:08:24.046 0000:00:11.0: build_io_request_7 test passed 00:08:24.046 0000:00:11.0: build_io_request_10 test passed 00:08:24.046 Cleaning up... 
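The build_io_request_* lines above come from test/nvme/sgl, which drives SPDK's scatter-gather I/O path: layouts whose segments sum to an invalid total length are rejected up front ("Invalid IO length parameter"), while well-formed ones complete ("test passed"). A minimal sketch of issuing a vectored read through the callback-based SGL API follows; the two-segment context and helper names are illustrative, not the test's actual internals.

#include <sys/uio.h>
#include "spdk/nvme.h"

/* Illustrative two-segment SGL; the real sgl test builds many layouts,
 * including deliberately invalid ones. */
struct sgl_ctx {
	struct iovec iov[2];
	int idx;
};

static void
reset_sgl(void *ref, uint32_t offset)
{
	struct sgl_ctx *ctx = ref;

	ctx->idx = 0;	/* restart iteration; offset handling elided */
}

static int
next_sge(void *ref, void **addr, uint32_t *len)
{
	struct sgl_ctx *ctx = ref;

	*addr = ctx->iov[ctx->idx].iov_base;
	*len = ctx->iov[ctx->idx].iov_len;
	ctx->idx++;
	return 0;
}

/* If the segments do not add up to lba_count * block size, submission
 * fails immediately - the "Invalid IO length parameter" case above. */
static int
submit_sgl_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		struct sgl_ctx *ctx, spdk_nvme_cmd_cb cb_fn)
{
	return spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* lba */, 8 /* lba count */,
				      cb_fn, ctx, 0, reset_sgl, next_sge);
}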
00:08:24.046 00:08:24.046 real 0m0.237s 00:08:24.046 user 0m0.111s 00:08:24.046 sys 0m0.084s 00:08:24.046 00:40:15 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.046 00:40:15 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:24.046 ************************************ 00:08:24.046 END TEST nvme_sgl 00:08:24.046 ************************************ 00:08:24.046 00:40:15 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:24.046 00:40:15 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.046 00:40:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.046 00:40:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.046 ************************************ 00:08:24.046 START TEST nvme_e2edp 00:08:24.046 ************************************ 00:08:24.046 00:40:15 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:24.304 NVMe Write/Read with End-to-End data protection test 00:08:24.304 Attached to 0000:00:10.0 00:08:24.304 Attached to 0000:00:11.0 00:08:24.304 Attached to 0000:00:13.0 00:08:24.304 Attached to 0000:00:12.0 00:08:24.304 Cleaning up... 00:08:24.304 00:08:24.304 real 0m0.176s 00:08:24.304 user 0m0.051s 00:08:24.304 sys 0m0.085s 00:08:24.304 ************************************ 00:08:24.304 END TEST nvme_e2edp 00:08:24.304 ************************************ 00:08:24.304 00:40:16 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.304 00:40:16 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:24.304 00:40:16 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:24.304 00:40:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.304 00:40:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.304 00:40:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.304 ************************************ 00:08:24.304 START TEST nvme_reserve 00:08:24.304 ************************************ 00:08:24.304 00:40:16 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:24.562 ===================================================== 00:08:24.562 NVMe Controller at PCI bus 0, device 16, function 0 00:08:24.562 ===================================================== 00:08:24.562 Reservations: Not Supported 00:08:24.562 ===================================================== 00:08:24.562 NVMe Controller at PCI bus 0, device 17, function 0 00:08:24.562 ===================================================== 00:08:24.562 Reservations: Not Supported 00:08:24.562 ===================================================== 00:08:24.562 NVMe Controller at PCI bus 0, device 19, function 0 00:08:24.562 ===================================================== 00:08:24.563 Reservations: Not Supported 00:08:24.563 ===================================================== 00:08:24.563 NVMe Controller at PCI bus 0, device 18, function 0 00:08:24.563 ===================================================== 00:08:24.563 Reservations: Not Supported 00:08:24.563 Reservation test passed 00:08:24.563 00:08:24.563 real 0m0.191s 00:08:24.563 user 0m0.057s 00:08:24.563 sys 0m0.086s 00:08:24.563 ************************************ 00:08:24.563 END TEST nvme_reserve 00:08:24.563 ************************************ 00:08:24.563 00:40:16 
nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.563 00:40:16 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:24.563 00:40:16 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:24.563 00:40:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.563 00:40:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.563 00:40:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.563 ************************************ 00:08:24.563 START TEST nvme_err_injection 00:08:24.563 ************************************ 00:08:24.563 00:40:16 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:24.563 NVMe Error Injection test 00:08:24.563 Attached to 0000:00:10.0 00:08:24.563 Attached to 0000:00:11.0 00:08:24.563 Attached to 0000:00:13.0 00:08:24.563 Attached to 0000:00:12.0 00:08:24.563 0000:00:10.0: get features failed as expected 00:08:24.563 0000:00:11.0: get features failed as expected 00:08:24.563 0000:00:13.0: get features failed as expected 00:08:24.563 0000:00:12.0: get features failed as expected 00:08:24.563 0000:00:10.0: get features successfully as expected 00:08:24.563 0000:00:11.0: get features successfully as expected 00:08:24.563 0000:00:13.0: get features successfully as expected 00:08:24.563 0000:00:12.0: get features successfully as expected 00:08:24.563 0000:00:10.0: read failed as expected 00:08:24.563 0000:00:11.0: read failed as expected 00:08:24.563 0000:00:13.0: read failed as expected 00:08:24.563 0000:00:12.0: read failed as expected 00:08:24.563 0000:00:11.0: read successfully as expected 00:08:24.563 0000:00:13.0: read successfully as expected 00:08:24.563 0000:00:12.0: read successfully as expected 00:08:24.563 0000:00:10.0: read successfully as expected 00:08:24.563 Cleaning up... 00:08:24.563 00:08:24.563 real 0m0.178s 00:08:24.563 user 0m0.062s 00:08:24.563 sys 0m0.078s 00:08:24.563 ************************************ 00:08:24.563 END TEST nvme_err_injection 00:08:24.563 ************************************ 00:08:24.563 00:40:16 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.563 00:40:16 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:24.821 00:40:16 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:24.821 00:40:16 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:24.821 00:40:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.821 00:40:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.821 ************************************ 00:08:24.821 START TEST nvme_overhead 00:08:24.821 ************************************ 00:08:24.821 00:40:16 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:25.755 Initializing NVMe Controllers 00:08:25.755 Attached to 0000:00:10.0 00:08:25.755 Attached to 0000:00:11.0 00:08:25.755 Attached to 0000:00:13.0 00:08:25.755 Attached to 0000:00:12.0 00:08:25.755 Initialization complete. Launching workers. 
00:08:25.755 submit (in ns) avg, min, max = 12154.3, 11260.8, 78439.2 00:08:25.755 complete (in ns) avg, min, max = 7608.0, 7222.3, 289003.1 00:08:25.755 00:08:25.755 Submit histogram 00:08:25.755 ================ 00:08:25.755 Range in us Cumulative Count 00:08:25.755 11.225 - 11.274: 0.0123% ( 2) 00:08:25.755 11.274 - 11.323: 0.0675% ( 9) 00:08:25.755 11.323 - 11.372: 0.2701% ( 33) 00:08:25.755 11.372 - 11.422: 1.1172% ( 138) 00:08:25.755 11.422 - 11.471: 3.6032% ( 405) 00:08:25.755 11.471 - 11.520: 7.9615% ( 710) 00:08:25.755 11.520 - 11.569: 13.9279% ( 972) 00:08:25.755 11.569 - 11.618: 21.6930% ( 1265) 00:08:25.755 11.618 - 11.668: 30.2191% ( 1389) 00:08:25.755 11.668 - 11.717: 38.4936% ( 1348) 00:08:25.755 11.717 - 11.766: 45.8720% ( 1202) 00:08:25.755 11.766 - 11.815: 51.8998% ( 982) 00:08:25.755 11.815 - 11.865: 56.3194% ( 720) 00:08:25.755 11.865 - 11.914: 60.2419% ( 639) 00:08:25.755 11.914 - 11.963: 63.0164% ( 452) 00:08:25.755 11.963 - 12.012: 65.0420% ( 330) 00:08:25.755 12.012 - 12.062: 66.4723% ( 233) 00:08:25.755 12.062 - 12.111: 67.5158% ( 170) 00:08:25.755 12.111 - 12.160: 68.3997% ( 144) 00:08:25.755 12.160 - 12.209: 69.0258% ( 102) 00:08:25.755 12.209 - 12.258: 69.5353% ( 83) 00:08:25.755 12.258 - 12.308: 70.0141% ( 78) 00:08:25.755 12.308 - 12.357: 70.7630% ( 122) 00:08:25.755 12.357 - 12.406: 71.6408% ( 143) 00:08:25.755 12.406 - 12.455: 72.8807% ( 202) 00:08:25.755 12.455 - 12.505: 74.5320% ( 269) 00:08:25.755 12.505 - 12.554: 76.5576% ( 330) 00:08:25.755 12.554 - 12.603: 78.9761% ( 394) 00:08:25.755 12.603 - 12.702: 84.1814% ( 848) 00:08:25.755 12.702 - 12.800: 88.6440% ( 727) 00:08:25.755 12.800 - 12.898: 91.9833% ( 544) 00:08:25.755 12.898 - 12.997: 94.1379% ( 351) 00:08:25.755 12.997 - 13.095: 95.3348% ( 195) 00:08:25.755 13.095 - 13.194: 96.0346% ( 114) 00:08:25.755 13.194 - 13.292: 96.3906% ( 58) 00:08:25.755 13.292 - 13.391: 96.4704% ( 13) 00:08:25.755 13.391 - 13.489: 96.5380% ( 11) 00:08:25.755 13.489 - 13.588: 96.6178% ( 13) 00:08:25.755 13.588 - 13.686: 96.7098% ( 15) 00:08:25.755 13.686 - 13.785: 96.8387% ( 21) 00:08:25.755 13.785 - 13.883: 96.9615% ( 20) 00:08:25.755 13.883 - 13.982: 97.0659% ( 17) 00:08:25.755 13.982 - 14.080: 97.1641% ( 16) 00:08:25.755 14.080 - 14.178: 97.2562% ( 15) 00:08:25.755 14.178 - 14.277: 97.3482% ( 15) 00:08:25.755 14.277 - 14.375: 97.4219% ( 12) 00:08:25.755 14.375 - 14.474: 97.4710% ( 8) 00:08:25.755 14.474 - 14.572: 97.5324% ( 10) 00:08:25.755 14.572 - 14.671: 97.5876% ( 9) 00:08:25.755 14.671 - 14.769: 97.6981% ( 18) 00:08:25.755 14.769 - 14.868: 97.7349% ( 6) 00:08:25.755 14.868 - 14.966: 97.7963% ( 10) 00:08:25.755 14.966 - 15.065: 97.8332% ( 6) 00:08:25.755 15.065 - 15.163: 97.8884% ( 9) 00:08:25.755 15.163 - 15.262: 97.9682% ( 13) 00:08:25.755 15.262 - 15.360: 97.9989% ( 5) 00:08:25.755 15.360 - 15.458: 98.0296% ( 5) 00:08:25.755 15.458 - 15.557: 98.0541% ( 4) 00:08:25.755 15.557 - 15.655: 98.0910% ( 6) 00:08:25.755 15.655 - 15.754: 98.1155% ( 4) 00:08:25.755 15.754 - 15.852: 98.1462% ( 5) 00:08:25.755 15.852 - 15.951: 98.1585% ( 2) 00:08:25.755 15.951 - 16.049: 98.1769% ( 3) 00:08:25.755 16.049 - 16.148: 98.2199% ( 7) 00:08:25.755 16.148 - 16.246: 98.2322% ( 2) 00:08:25.755 16.246 - 16.345: 98.2567% ( 4) 00:08:25.755 16.345 - 16.443: 98.2690% ( 2) 00:08:25.755 16.443 - 16.542: 98.3058% ( 6) 00:08:25.755 16.542 - 16.640: 98.3120% ( 1) 00:08:25.756 16.640 - 16.738: 98.3488% ( 6) 00:08:25.756 16.738 - 16.837: 98.3611% ( 2) 00:08:25.756 16.837 - 16.935: 98.3795% ( 3) 00:08:25.756 16.935 - 17.034: 98.4286% ( 8) 
00:08:25.756 17.034 - 17.132: 98.5329% ( 17) 00:08:25.756 17.132 - 17.231: 98.6005% ( 11) 00:08:25.756 17.231 - 17.329: 98.6864% ( 14) 00:08:25.756 17.329 - 17.428: 98.7785% ( 15) 00:08:25.756 17.428 - 17.526: 98.8705% ( 15) 00:08:25.756 17.526 - 17.625: 98.9688% ( 16) 00:08:25.756 17.625 - 17.723: 99.0117% ( 7) 00:08:25.756 17.723 - 17.822: 99.0792% ( 11) 00:08:25.756 17.822 - 17.920: 99.1775% ( 16) 00:08:25.756 17.920 - 18.018: 99.2266% ( 8) 00:08:25.756 18.018 - 18.117: 99.3248% ( 16) 00:08:25.756 18.117 - 18.215: 99.3616% ( 6) 00:08:25.756 18.215 - 18.314: 99.4475% ( 14) 00:08:25.756 18.314 - 18.412: 99.5028% ( 9) 00:08:25.756 18.412 - 18.511: 99.5273% ( 4) 00:08:25.756 18.511 - 18.609: 99.5335% ( 1) 00:08:25.756 18.609 - 18.708: 99.5949% ( 10) 00:08:25.756 18.708 - 18.806: 99.6071% ( 2) 00:08:25.756 18.806 - 18.905: 99.6378% ( 5) 00:08:25.756 18.905 - 19.003: 99.6440% ( 1) 00:08:25.756 19.003 - 19.102: 99.6992% ( 9) 00:08:25.756 19.102 - 19.200: 99.7176% ( 3) 00:08:25.756 19.200 - 19.298: 99.7422% ( 4) 00:08:25.756 19.298 - 19.397: 99.7606% ( 3) 00:08:25.756 19.397 - 19.495: 99.7729% ( 2) 00:08:25.756 19.594 - 19.692: 99.7852% ( 2) 00:08:25.756 19.889 - 19.988: 99.7913% ( 1) 00:08:25.756 19.988 - 20.086: 99.8036% ( 2) 00:08:25.756 20.185 - 20.283: 99.8158% ( 2) 00:08:25.756 20.677 - 20.775: 99.8220% ( 1) 00:08:25.756 20.775 - 20.874: 99.8281% ( 1) 00:08:25.756 20.874 - 20.972: 99.8343% ( 1) 00:08:25.756 21.169 - 21.268: 99.8465% ( 2) 00:08:25.756 21.366 - 21.465: 99.8527% ( 1) 00:08:25.756 21.662 - 21.760: 99.8588% ( 1) 00:08:25.756 21.858 - 21.957: 99.8711% ( 2) 00:08:25.756 22.055 - 22.154: 99.8772% ( 1) 00:08:25.756 22.154 - 22.252: 99.8834% ( 1) 00:08:25.756 22.252 - 22.351: 99.8895% ( 1) 00:08:25.756 22.449 - 22.548: 99.9018% ( 2) 00:08:25.756 22.646 - 22.745: 99.9079% ( 1) 00:08:25.756 22.843 - 22.942: 99.9141% ( 1) 00:08:25.756 23.040 - 23.138: 99.9202% ( 1) 00:08:25.756 24.517 - 24.615: 99.9263% ( 1) 00:08:25.756 25.206 - 25.403: 99.9325% ( 1) 00:08:25.756 26.388 - 26.585: 99.9386% ( 1) 00:08:25.756 26.585 - 26.782: 99.9448% ( 1) 00:08:25.756 27.963 - 28.160: 99.9509% ( 1) 00:08:25.756 30.720 - 30.917: 99.9570% ( 1) 00:08:25.756 30.917 - 31.114: 99.9632% ( 1) 00:08:25.756 31.508 - 31.705: 99.9693% ( 1) 00:08:25.756 34.265 - 34.462: 99.9754% ( 1) 00:08:25.756 36.431 - 36.628: 99.9816% ( 1) 00:08:25.756 40.960 - 41.157: 99.9877% ( 1) 00:08:25.756 51.200 - 51.594: 99.9939% ( 1) 00:08:25.756 78.375 - 78.769: 100.0000% ( 1) 00:08:25.756 00:08:25.756 Complete histogram 00:08:25.756 ================== 00:08:25.756 Range in us Cumulative Count 00:08:25.756 7.188 - 7.237: 0.0430% ( 7) 00:08:25.756 7.237 - 7.286: 0.6016% ( 91) 00:08:25.756 7.286 - 7.335: 3.3270% ( 444) 00:08:25.756 7.335 - 7.385: 14.0323% ( 1744) 00:08:25.756 7.385 - 7.434: 38.1867% ( 3935) 00:08:25.756 7.434 - 7.483: 63.8328% ( 4178) 00:08:25.756 7.483 - 7.532: 80.9588% ( 2790) 00:08:25.756 7.532 - 7.582: 89.2517% ( 1351) 00:08:25.756 7.582 - 7.631: 93.4135% ( 678) 00:08:25.756 7.631 - 7.680: 95.5067% ( 341) 00:08:25.756 7.680 - 7.729: 96.7774% ( 207) 00:08:25.756 7.729 - 7.778: 97.4649% ( 112) 00:08:25.756 7.778 - 7.828: 97.9007% ( 71) 00:08:25.756 7.828 - 7.877: 98.1032% ( 33) 00:08:25.756 7.877 - 7.926: 98.2199% ( 19) 00:08:25.756 7.926 - 7.975: 98.2935% ( 12) 00:08:25.756 7.975 - 8.025: 98.3120% ( 3) 00:08:25.756 8.025 - 8.074: 98.3426% ( 5) 00:08:25.756 8.074 - 8.123: 98.3488% ( 1) 00:08:25.756 8.123 - 8.172: 98.3611% ( 2) 00:08:25.756 8.172 - 8.222: 98.3672% ( 1) 00:08:25.756 8.271 - 8.320: 98.3733% ( 1) 
00:08:25.756 8.566 - 8.615: 98.3795% ( 1) 00:08:25.756 9.403 - 9.452: 98.3856% ( 1) 00:08:25.756 9.600 - 9.649: 98.3918% ( 1) 00:08:25.756 9.698 - 9.748: 98.3979% ( 1) 00:08:25.756 9.797 - 9.846: 98.4040% ( 1) 00:08:25.756 10.437 - 10.486: 98.4102% ( 1) 00:08:25.756 10.535 - 10.585: 98.4163% ( 1) 00:08:25.756 11.126 - 11.175: 98.4224% ( 1) 00:08:25.756 11.520 - 11.569: 98.4286% ( 1) 00:08:25.756 11.717 - 11.766: 98.4409% ( 2) 00:08:25.756 11.766 - 11.815: 98.4470% ( 1) 00:08:25.756 11.815 - 11.865: 98.4531% ( 1) 00:08:25.756 11.865 - 11.914: 98.4593% ( 1) 00:08:26.014 11.963 - 12.012: 98.4654% ( 1) 00:08:26.014 12.258 - 12.308: 98.4715% ( 1) 00:08:26.014 12.308 - 12.357: 98.4838% ( 2) 00:08:26.014 12.455 - 12.505: 98.4900% ( 1) 00:08:26.014 12.603 - 12.702: 98.5022% ( 2) 00:08:26.014 12.702 - 12.800: 98.5145% ( 2) 00:08:26.014 12.800 - 12.898: 98.5391% ( 4) 00:08:26.014 12.898 - 12.997: 98.6127% ( 12) 00:08:26.014 12.997 - 13.095: 98.7048% ( 15) 00:08:26.015 13.095 - 13.194: 98.8153% ( 18) 00:08:26.015 13.194 - 13.292: 98.8828% ( 11) 00:08:26.015 13.292 - 13.391: 98.9442% ( 10) 00:08:26.015 13.391 - 13.489: 99.0117% ( 11) 00:08:26.015 13.489 - 13.588: 99.1161% ( 17) 00:08:26.015 13.588 - 13.686: 99.1897% ( 12) 00:08:26.015 13.686 - 13.785: 99.2388% ( 8) 00:08:26.015 13.785 - 13.883: 99.3186% ( 13) 00:08:26.015 13.883 - 13.982: 99.3800% ( 10) 00:08:26.015 13.982 - 14.080: 99.4291% ( 8) 00:08:26.015 14.080 - 14.178: 99.5089% ( 13) 00:08:26.015 14.178 - 14.277: 99.5519% ( 7) 00:08:26.015 14.277 - 14.375: 99.5826% ( 5) 00:08:26.015 14.375 - 14.474: 99.6133% ( 5) 00:08:26.015 14.474 - 14.572: 99.6563% ( 7) 00:08:26.015 14.572 - 14.671: 99.6808% ( 4) 00:08:26.015 14.671 - 14.769: 99.6869% ( 1) 00:08:26.015 14.769 - 14.868: 99.7115% ( 4) 00:08:26.015 14.868 - 14.966: 99.7176% ( 1) 00:08:26.015 14.966 - 15.065: 99.7483% ( 5) 00:08:26.015 15.360 - 15.458: 99.7545% ( 1) 00:08:26.015 15.458 - 15.557: 99.7606% ( 1) 00:08:26.015 15.852 - 15.951: 99.7667% ( 1) 00:08:26.015 16.148 - 16.246: 99.7729% ( 1) 00:08:26.015 16.246 - 16.345: 99.7790% ( 1) 00:08:26.015 16.345 - 16.443: 99.7852% ( 1) 00:08:26.015 16.542 - 16.640: 99.7913% ( 1) 00:08:26.015 16.837 - 16.935: 99.7974% ( 1) 00:08:26.015 16.935 - 17.034: 99.8036% ( 1) 00:08:26.015 17.034 - 17.132: 99.8158% ( 2) 00:08:26.015 17.132 - 17.231: 99.8220% ( 1) 00:08:26.015 17.329 - 17.428: 99.8343% ( 2) 00:08:26.015 17.428 - 17.526: 99.8404% ( 1) 00:08:26.015 17.526 - 17.625: 99.8465% ( 1) 00:08:26.015 17.625 - 17.723: 99.8527% ( 1) 00:08:26.015 17.723 - 17.822: 99.8588% ( 1) 00:08:26.015 17.920 - 18.018: 99.8650% ( 1) 00:08:26.015 18.117 - 18.215: 99.8711% ( 1) 00:08:26.015 18.412 - 18.511: 99.8772% ( 1) 00:08:26.015 18.511 - 18.609: 99.8834% ( 1) 00:08:26.015 18.708 - 18.806: 99.8895% ( 1) 00:08:26.015 19.102 - 19.200: 99.8956% ( 1) 00:08:26.015 19.200 - 19.298: 99.9079% ( 2) 00:08:26.015 20.972 - 21.071: 99.9141% ( 1) 00:08:26.015 22.154 - 22.252: 99.9202% ( 1) 00:08:26.015 25.797 - 25.994: 99.9263% ( 1) 00:08:26.015 28.751 - 28.948: 99.9325% ( 1) 00:08:26.015 29.538 - 29.735: 99.9386% ( 1) 00:08:26.015 33.674 - 33.871: 99.9448% ( 1) 00:08:26.015 36.431 - 36.628: 99.9509% ( 1) 00:08:26.015 37.415 - 37.612: 99.9570% ( 1) 00:08:26.015 38.794 - 38.991: 99.9632% ( 1) 00:08:26.015 47.458 - 47.655: 99.9693% ( 1) 00:08:26.015 48.443 - 48.640: 99.9754% ( 1) 00:08:26.015 49.231 - 49.428: 99.9816% ( 1) 00:08:26.015 53.563 - 53.957: 99.9877% ( 1) 00:08:26.015 59.865 - 60.258: 99.9939% ( 1) 00:08:26.015 288.295 - 289.871: 100.0000% ( 1) 00:08:26.015 00:08:26.015 
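The submit and complete histograms above are printed by test/nvme/overhead, which accumulates the time spent submitting each 4 KiB read separately from the time spent in the completion path; the averages in this run work out to roughly 12.2 us per submit and 7.6 us per completion. A simplified sketch of the measurement idea, using SPDK's tick counter (the real tool's bucketing and completion attribution are more careful than this):

#include "spdk/env.h"
#include "spdk/nvme.h"

static uint64_t g_submit_ticks, g_complete_ticks, g_ios;

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

/* Time one I/O: charge the submit call and the polling loop separately.
 * Note this simplification also charges device latency to "complete". */
static void
time_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	uint64_t t0 = spdk_get_ticks();

	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1, io_complete, NULL, 0);
	uint64_t t1 = spdk_get_ticks();

	while (spdk_nvme_qpair_process_completions(qpair, 0) == 0) {
	}
	uint64_t t2 = spdk_get_ticks();

	g_submit_ticks += t1 - t0;
	g_complete_ticks += t2 - t1;
	g_ios++;
}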
************************************ 00:08:26.015 END TEST nvme_overhead 00:08:26.015 ************************************ 00:08:26.015 00:08:26.015 real 0m1.193s 00:08:26.015 user 0m1.053s 00:08:26.015 sys 0m0.095s 00:08:26.015 00:40:17 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.015 00:40:17 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:26.015 00:40:17 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:26.015 00:40:17 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:26.015 00:40:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.015 00:40:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.015 ************************************ 00:08:26.015 START TEST nvme_arbitration 00:08:26.015 ************************************ 00:08:26.015 00:40:17 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:29.294 Initializing NVMe Controllers 00:08:29.294 Attached to 0000:00:10.0 00:08:29.294 Attached to 0000:00:11.0 00:08:29.294 Attached to 0000:00:13.0 00:08:29.294 Attached to 0000:00:12.0 00:08:29.294 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:29.294 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:29.294 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:29.294 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:29.294 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:29.294 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:29.294 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:29.294 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:29.294 Initialization complete. Launching workers. 
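Each worker that starts below runs on its own core and drives a queue pair whose arbitration priority was fixed when the qpair was allocated; the "-t 3" on the command line bounds the run to three seconds. A hedged sketch of the priority hookup (the calls are real SPDK API, but the actual arbitration example does considerably more, including programming the arbitration feature itself):

#include "spdk/nvme.h"

/* Allocate an I/O qpair at a given weighted-round-robin priority.
 * This only takes effect when WRR arbitration is active on the
 * controller; the "urgent priority queue" threads below would use
 * SPDK_NVME_QPRIO_URGENT. */
static struct spdk_nvme_qpair *
alloc_prio_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio qprio)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = qprio;
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}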
00:08:29.294 Starting thread on core 1 with urgent priority queue 00:08:29.294 Starting thread on core 2 with urgent priority queue 00:08:29.294 Starting thread on core 3 with urgent priority queue 00:08:29.294 Starting thread on core 0 with urgent priority queue 00:08:29.294 QEMU NVMe Ctrl (12340 ) core 0: 6400.00 IO/s 15.62 secs/100000 ios 00:08:29.294 QEMU NVMe Ctrl (12342 ) core 0: 6400.00 IO/s 15.62 secs/100000 ios 00:08:29.294 QEMU NVMe Ctrl (12341 ) core 1: 6101.33 IO/s 16.39 secs/100000 ios 00:08:29.294 QEMU NVMe Ctrl (12342 ) core 1: 6101.33 IO/s 16.39 secs/100000 ios 00:08:29.294 QEMU NVMe Ctrl (12343 ) core 2: 6577.67 IO/s 15.20 secs/100000 ios 00:08:29.294 QEMU NVMe Ctrl (12342 ) core 3: 6336.00 IO/s 15.78 secs/100000 ios 00:08:29.294 ======================================================== 00:08:29.294 00:08:29.294 00:08:29.294 real 0m3.221s 00:08:29.294 user 0m9.028s 00:08:29.294 sys 0m0.110s 00:08:29.294 00:40:21 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.294 00:40:21 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:29.294 ************************************ 00:08:29.294 END TEST nvme_arbitration 00:08:29.294 ************************************ 00:08:29.294 00:40:21 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:29.294 00:40:21 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:29.294 00:40:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.294 00:40:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.294 ************************************ 00:08:29.294 START TEST nvme_single_aen 00:08:29.294 ************************************ 00:08:29.294 00:40:21 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:29.294 Asynchronous Event Request test 00:08:29.294 Attached to 0000:00:10.0 00:08:29.294 Attached to 0000:00:11.0 00:08:29.294 Attached to 0000:00:13.0 00:08:29.294 Attached to 0000:00:12.0 00:08:29.294 Reset controller to setup AER completions for this process 00:08:29.294 Registering asynchronous event callbacks... 
00:08:29.294 Getting orig temperature thresholds of all controllers 00:08:29.294 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.294 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.294 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.294 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.294 Setting all controllers temperature threshold low to trigger AER 00:08:29.294 Waiting for all controllers temperature threshold to be set lower 00:08:29.294 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.294 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:29.294 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.294 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:29.294 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.294 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:29.294 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.295 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:29.295 Waiting for all controllers to trigger AER and reset threshold 00:08:29.295 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.295 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.295 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.295 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.295 Cleaning up... 00:08:29.295 ************************************ 00:08:29.295 END TEST nvme_single_aen 00:08:29.295 ************************************ 00:08:29.295 00:08:29.295 real 0m0.198s 00:08:29.295 user 0m0.065s 00:08:29.295 sys 0m0.091s 00:08:29.295 00:40:21 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.295 00:40:21 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:29.553 00:40:21 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:29.553 00:40:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.553 00:40:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.553 00:40:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.553 ************************************ 00:08:29.553 START TEST nvme_doorbell_aers 00:08:29.553 ************************************ 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
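Both the temperature-threshold test that just finished (nvme_single_aen) and the doorbell test being set up here hinge on asynchronous event requests: the driver arms AERs, the test provokes a threshold crossing or an error, and a callback fires. A minimal sketch of that registration pattern, with the set-features step elided (the setup_aer helper name is illustrative):

#include "spdk/nvme.h"

/* Fires when the controller posts an asynchronous event; the
 * "aer_cb - Resetting Temp Threshold" lines above are the test's
 * version of this callback reacting to a temperature event. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* decode cpl->cdw0 for the event type/info, then fetch log page 2 */
}

static void
setup_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	/* The test then lowers the temperature threshold (feature 0x04)
	 * via spdk_nvme_ctrlr_cmd_set_feature() so the event triggers;
	 * parameters are elided here. */
}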
00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:29.553 00:40:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:29.811 [2024-11-17 00:40:21.626012] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:08:39.803 Executing: test_write_invalid_db 00:08:39.803 Waiting for AER completion... 00:08:39.803 Failure: test_write_invalid_db 00:08:39.803 00:08:39.803 Executing: test_invalid_db_write_overflow_sq 00:08:39.803 Waiting for AER completion... 00:08:39.803 Failure: test_invalid_db_write_overflow_sq 00:08:39.803 00:08:39.803 Executing: test_invalid_db_write_overflow_cq 00:08:39.803 Waiting for AER completion... 00:08:39.803 Failure: test_invalid_db_write_overflow_cq 00:08:39.803 00:08:39.803 00:40:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:39.803 00:40:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:39.803 [2024-11-17 00:40:31.675089] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:08:49.774 Executing: test_write_invalid_db 00:08:49.775 Waiting for AER completion... 00:08:49.775 Failure: test_write_invalid_db 00:08:49.775 00:08:49.775 Executing: test_invalid_db_write_overflow_sq 00:08:49.775 Waiting for AER completion... 00:08:49.775 Failure: test_invalid_db_write_overflow_sq 00:08:49.775 00:08:49.775 Executing: test_invalid_db_write_overflow_cq 00:08:49.775 Waiting for AER completion... 00:08:49.775 Failure: test_invalid_db_write_overflow_cq 00:08:49.775 00:08:49.775 00:40:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:49.775 00:40:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:49.775 [2024-11-17 00:40:41.688083] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:08:59.748 Executing: test_write_invalid_db 00:08:59.748 Waiting for AER completion... 00:08:59.748 Failure: test_write_invalid_db 00:08:59.748 00:08:59.748 Executing: test_invalid_db_write_overflow_sq 00:08:59.748 Waiting for AER completion... 00:08:59.748 Failure: test_invalid_db_write_overflow_sq 00:08:59.748 00:08:59.748 Executing: test_invalid_db_write_overflow_cq 00:08:59.748 Waiting for AER completion... 
00:08:59.748 Failure: test_invalid_db_write_overflow_cq 00:08:59.748 00:08:59.748 00:40:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:59.748 00:40:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:59.748 [2024-11-17 00:40:51.723367] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 Executing: test_write_invalid_db 00:09:09.722 Waiting for AER completion... 00:09:09.722 Failure: test_write_invalid_db 00:09:09.722 00:09:09.722 Executing: test_invalid_db_write_overflow_sq 00:09:09.722 Waiting for AER completion... 00:09:09.722 Failure: test_invalid_db_write_overflow_sq 00:09:09.722 00:09:09.722 Executing: test_invalid_db_write_overflow_cq 00:09:09.722 Waiting for AER completion... 00:09:09.722 Failure: test_invalid_db_write_overflow_cq 00:09:09.722 00:09:09.722 00:09:09.722 real 0m40.181s 00:09:09.722 user 0m34.143s 00:09:09.722 sys 0m5.655s 00:09:09.722 00:41:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.722 00:41:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:09.722 ************************************ 00:09:09.722 END TEST nvme_doorbell_aers 00:09:09.722 ************************************ 00:09:09.722 00:41:01 nvme -- nvme/nvme.sh@97 -- # uname 00:09:09.722 00:41:01 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:09.722 00:41:01 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:09.722 00:41:01 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:09.722 00:41:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.722 00:41:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.722 ************************************ 00:09:09.722 START TEST nvme_multi_aen 00:09:09.722 ************************************ 00:09:09.722 00:41:01 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:09.722 [2024-11-17 00:41:01.756138] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.756200] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.756210] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.757545] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.757571] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.757579] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.758532] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. 
Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.758554] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.758562] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.759718] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.759856] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 [2024-11-17 00:41:01.759906] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75551) is not found. Dropping the request. 00:09:09.722 Child process pid: 76072 00:09:10.000 [Child] Asynchronous Event Request test 00:09:10.000 [Child] Attached to 0000:00:10.0 00:09:10.000 [Child] Attached to 0000:00:11.0 00:09:10.000 [Child] Attached to 0000:00:13.0 00:09:10.000 [Child] Attached to 0000:00:12.0 00:09:10.000 [Child] Registering asynchronous event callbacks... 00:09:10.000 [Child] Getting orig temperature thresholds of all controllers 00:09:10.000 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.000 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.000 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.000 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.000 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:10.000 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.000 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.000 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.000 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.000 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.000 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.000 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.000 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.000 [Child] Cleaning up... 00:09:10.000 Asynchronous Event Request test 00:09:10.000 Attached to 0000:00:10.0 00:09:10.000 Attached to 0000:00:11.0 00:09:10.000 Attached to 0000:00:13.0 00:09:10.000 Attached to 0000:00:12.0 00:09:10.000 Reset controller to setup AER completions for this process 00:09:10.000 Registering asynchronous event callbacks... 
00:09:10.000 Getting orig temperature thresholds of all controllers 00:09:10.000 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.000 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.000 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.000 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.000 Setting all controllers temperature threshold low to trigger AER 00:09:10.000 Waiting for all controllers temperature threshold to be set lower 00:09:10.000 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.000 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:10.000 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.000 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:10.000 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.000 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:10.000 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.000 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:10.000 Waiting for all controllers to trigger AER and reset threshold 00:09:10.000 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.000 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.000 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.000 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.000 Cleaning up... 00:09:10.000 00:09:10.000 real 0m0.388s 00:09:10.000 user 0m0.108s 00:09:10.000 sys 0m0.177s 00:09:10.000 00:41:02 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.000 00:41:02 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:10.000 ************************************ 00:09:10.000 END TEST nvme_multi_aen 00:09:10.000 ************************************ 00:09:10.000 00:41:02 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:10.000 00:41:02 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:10.000 00:41:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.000 00:41:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.000 ************************************ 00:09:10.000 START TEST nvme_startup 00:09:10.000 ************************************ 00:09:10.000 00:41:02 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:10.279 Initializing NVMe Controllers 00:09:10.279 Attached to 0000:00:10.0 00:09:10.279 Attached to 0000:00:11.0 00:09:10.279 Attached to 0000:00:13.0 00:09:10.279 Attached to 0000:00:12.0 00:09:10.279 Initialization complete. 00:09:10.279 Time used:118950.891 (us). 
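The startup test above does nothing beyond attaching to the controllers and reporting how long that took; "Time used: 118950.891 (us)" is essentially one probe/attach pass over all four devices. A sketch of timing that pass (probe_cb and attach_cb are the real SPDK hooks; the timing helper is illustrative):

#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* attach to every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
}

/* Wall-clock cost of enumerating and initializing all local controllers,
 * in microseconds - the figure the startup test prints. */
static uint64_t
time_startup_us(void)
{
	uint64_t t0 = spdk_get_ticks();

	spdk_nvme_probe(NULL /* all local PCIe */, NULL, probe_cb, attach_cb, NULL);
	return (spdk_get_ticks() - t0) * 1000000 / spdk_get_ticks_hz();
}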
00:09:10.279 00:09:10.279 real 0m0.175s 00:09:10.279 user 0m0.053s 00:09:10.279 sys 0m0.078s 00:09:10.279 00:41:02 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.279 00:41:02 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:10.279 ************************************ 00:09:10.279 END TEST nvme_startup 00:09:10.279 ************************************ 00:09:10.279 00:41:02 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:10.279 00:41:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.279 00:41:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.279 00:41:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.279 ************************************ 00:09:10.279 START TEST nvme_multi_secondary 00:09:10.279 ************************************ 00:09:10.279 00:41:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:10.279 00:41:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76128 00:09:10.279 00:41:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:10.279 00:41:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76129 00:09:10.279 00:41:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:10.279 00:41:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:13.562 Initializing NVMe Controllers 00:09:13.562 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:13.562 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:13.562 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:13.562 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:13.562 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:13.562 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:13.562 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:13.562 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:13.562 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:13.562 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:13.562 Initialization complete. Launching workers. 
00:09:13.562 ========================================================
00:09:13.562 Latency(us)
00:09:13.562 Device Information : IOPS MiB/s Average min max
00:09:13.562 PCIE (0000:00:10.0) NSID 1 from core 2: 1688.94 6.60 9471.09 859.48 35077.90
00:09:13.562 PCIE (0000:00:11.0) NSID 1 from core 2: 1688.94 6.60 9472.91 851.28 30220.74
00:09:13.562 PCIE (0000:00:13.0) NSID 1 from core 2: 1688.94 6.60 9475.77 898.19 34028.46
00:09:13.562 PCIE (0000:00:12.0) NSID 1 from core 2: 1688.94 6.60 9475.65 895.06 32214.88
00:09:13.562 PCIE (0000:00:12.0) NSID 2 from core 2: 1688.94 6.60 9468.31 892.31 32246.87
00:09:13.562 PCIE (0000:00:12.0) NSID 3 from core 2: 1688.94 6.60 9464.86 882.45 27774.36
00:09:13.562 ========================================================
00:09:13.562 Total : 10133.66 39.58 9471.43 851.28 35077.90
00:09:13.562 00:09:13.562 00:41:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76128 00:09:13.563 Initializing NVMe Controllers 00:09:13.563 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:13.563 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:13.563 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:13.563 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:13.563 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:13.563 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:13.563 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:13.563 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:13.563 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:13.563 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:13.563 Initialization complete. Launching workers.
00:09:13.563 ========================================================
00:09:13.563 Latency(us)
00:09:13.563 Device Information : IOPS MiB/s Average min max
00:09:13.563 PCIE (0000:00:10.0) NSID 1 from core 1: 3840.94 15.00 4163.81 978.70 11023.00
00:09:13.563 PCIE (0000:00:11.0) NSID 1 from core 1: 3840.94 15.00 4165.66 1064.68 10011.86
00:09:13.563 PCIE (0000:00:13.0) NSID 1 from core 1: 3840.94 15.00 4165.53 1081.39 10762.47
00:09:13.563 PCIE (0000:00:12.0) NSID 1 from core 1: 3840.94 15.00 4166.10 1082.89 10769.21
00:09:13.563 PCIE (0000:00:12.0) NSID 2 from core 1: 3840.94 15.00 4166.24 1039.62 13005.50
00:09:13.563 PCIE (0000:00:12.0) NSID 3 from core 1: 3840.94 15.00 4166.09 1171.21 12397.55
00:09:13.563 ========================================================
00:09:13.563 Total : 23045.66 90.02 4165.57 978.70 13005.50
00:09:15.477 Initializing NVMe Controllers 00:09:15.477 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:15.477 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:15.477 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:15.477 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:15.477 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:15.477 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:15.477 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:15.477 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:15.477 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:15.477 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:15.477 Initialization complete. Launching workers.
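A quick sanity check on these tables: at the fixed 4 KiB I/O size, MiB/s is just IOPS x 4096 / 2^20. For the core-2 table above, 1688.94 x 4096 / 1048576 ≈ 6.60 MiB/s per namespace, and the Total row is simply the six per-namespace rows summed: 6 x 1688.94 ≈ 10133.6 IOPS ≈ 39.6 MiB/s, matching the printed figures.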
00:09:15.477 ========================================================
00:09:15.477 Latency(us)
00:09:15.477 Device Information : IOPS MiB/s Average min max
00:09:15.477 PCIE (0000:00:10.0) NSID 1 from core 0: 5189.89 20.27 3081.37 744.72 9826.03
00:09:15.477 PCIE (0000:00:11.0) NSID 1 from core 0: 5189.89 20.27 3082.59 751.38 9852.60
00:09:15.477 PCIE (0000:00:13.0) NSID 1 from core 0: 5189.89 20.27 3082.55 768.03 10173.70
00:09:15.477 PCIE (0000:00:12.0) NSID 1 from core 0: 5189.89 20.27 3082.53 761.51 11337.69
00:09:15.477 PCIE (0000:00:12.0) NSID 2 from core 0: 5189.89 20.27 3082.48 761.13 9848.56
00:09:15.477 PCIE (0000:00:12.0) NSID 3 from core 0: 5189.89 20.27 3082.42 735.04 10846.06
00:09:15.477 ========================================================
00:09:15.477 Total : 31139.34 121.64 3082.32 735.04 11337.69
00:09:15.738 00:41:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76129 00:09:15.738 00:41:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76199 00:09:15.738 00:41:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:15.738 00:41:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76200 00:09:15.738 00:41:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:15.738 00:41:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
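The -c arguments are hexadecimal core masks with one bit per logical core: 0x1 is core 0, 0x2 is core 1, 0x4 is core 2 (a mask like 0x5 would span cores 0 and 2). Note that this second round swaps the run lengths relative to the first: -t 3 now goes to core 0 and -t 5 to core 2, which is why core 2's table is the last one to appear below.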
00:09:19.038 ========================================================
00:09:19.038 Latency(us)
00:09:19.038 Device Information : IOPS MiB/s Average min max
00:09:19.038 PCIE (0000:00:10.0) NSID 1 from core 1: 3479.36 13.59 4596.65 1368.83 12252.78
00:09:19.038 PCIE (0000:00:11.0) NSID 1 from core 1: 3479.36 13.59 4599.15 1314.48 12865.84
00:09:19.038 PCIE (0000:00:13.0) NSID 1 from core 1: 3479.36 13.59 4599.43 1296.81 13651.03
00:09:19.038 PCIE (0000:00:12.0) NSID 1 from core 1: 3479.36 13.59 4599.26 1426.45 12677.17
00:09:19.038 PCIE (0000:00:12.0) NSID 2 from core 1: 3479.36 13.59 4599.09 1286.33 12201.38
00:09:19.038 PCIE (0000:00:12.0) NSID 3 from core 1: 3484.69 13.61 4591.90 1131.85 11316.76
00:09:19.038 ========================================================
00:09:19.038 Total : 20881.51 81.57 4597.58 1131.85 13651.03
00:09:19.038 00:09:19.038 Initializing NVMe Controllers 00:09:19.038 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.038 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.038 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.038 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.038 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:19.038 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:19.038 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:19.038 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:19.038 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:19.038 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:19.038 Initialization complete. Launching workers.
00:09:19.038 ========================================================
00:09:19.038 Latency(us)
00:09:19.038 Device Information : IOPS MiB/s Average min max
00:09:19.038 PCIE (0000:00:10.0) NSID 1 from core 0: 3629.23 14.18 4406.63 892.65 14450.97
00:09:19.038 PCIE (0000:00:11.0) NSID 1 from core 0: 3629.23 14.18 4407.73 895.84 12652.18
00:09:19.038 PCIE (0000:00:13.0) NSID 1 from core 0: 3629.23 14.18 4407.49 904.48 12715.40
00:09:19.038 PCIE (0000:00:12.0) NSID 1 from core 0: 3629.23 14.18 4407.26 891.89 11640.87
00:09:19.038 PCIE (0000:00:12.0) NSID 2 from core 0: 3629.23 14.18 4407.01 700.99 13815.69
00:09:19.038 PCIE (0000:00:12.0) NSID 3 from core 0: 3634.56 14.20 4400.28 631.40 14672.19
00:09:19.038 ========================================================
00:09:19.038 Total : 21780.72 85.08 4406.07 631.40 14672.19
00:09:20.955 Initializing NVMe Controllers 00:09:20.955 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:20.955 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:20.955 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:20.955 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:20.955 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:20.955 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:20.955 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:20.955 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:20.955 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:20.955 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:20.955 Initialization complete. Launching workers.
00:09:20.955 ========================================================
00:09:20.955 Latency(us)
00:09:20.955 Device Information : IOPS MiB/s Average min max
00:09:20.955 PCIE (0000:00:10.0) NSID 1 from core 2: 2053.06 8.02 7788.55 1111.49 30407.21
00:09:20.955 PCIE (0000:00:11.0) NSID 1 from core 2: 2053.06 8.02 7785.55 1144.75 32475.78
00:09:20.955 PCIE (0000:00:13.0) NSID 1 from core 2: 2053.06 8.02 7786.12 1014.28 31736.22
00:09:20.955 PCIE (0000:00:12.0) NSID 1 from core 2: 2053.06 8.02 7785.47 1141.83 32636.69
00:09:20.955 PCIE (0000:00:12.0) NSID 2 from core 2: 2053.06 8.02 7785.61 1227.60 28307.38
00:09:20.955 PCIE (0000:00:12.0) NSID 3 from core 2: 2053.06 8.02 7785.34 475.03 31864.17
00:09:20.955 ========================================================
00:09:20.955 Total : 12318.34 48.12 7786.11 475.03 32636.69
00:09:20.955 00:09:20.955 ************************************ 00:09:20.955 END TEST nvme_multi_secondary 00:09:20.955 ************************************ 00:09:20.955 00:41:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76199 00:09:20.955 00:41:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76200 00:09:20.955 00:09:20.955 real 0m10.673s 00:09:20.955 user 0m18.161s 00:09:20.955 sys 0m0.813s 00:09:20.955 00:41:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.955 00:41:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:20.955 00:41:12 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:20.955 00:41:12 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:20.955 00:41:12 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75154 ]] 00:09:20.955 00:41:12 nvme -- common/autotest_common.sh@1090 -- # kill 75154 00:09:20.955 00:41:12 nvme -- common/autotest_common.sh@1091 -- # wait 75154 00:09:20.955 [2024-11-17 00:41:12.965041] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. [2024-11-17 00:41:12.965151] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. [2024-11-17 00:41:12.965177] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. [2024-11-17 00:41:12.965201] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. [2024-11-17 00:41:12.966102] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. [2024-11-17 00:41:12.966173] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. [2024-11-17 00:41:12.966196] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. [2024-11-17 00:41:12.966219] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. [2024-11-17 00:41:12.967109] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request.
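The burst of nvme_pcie_common.c "Dropping the request" errors starting here (and continuing below) is expected teardown noise rather than a test failure: kill_stub tears down the long-lived stub process (pid 75154) that kept the controllers initialized between tests, and pending admin requests still registered to a process that has already exited (pid 76071) are simply discarded once their owner can no longer be found. The guard in the trace amounts to roughly:

    [[ -e /proc/75154 ]] && kill 75154 && wait 75154    # reap the stub only if it is still running

and the suite carries on normally with the next test.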
00:09:20.956 [2024-11-17 00:41:12.967173] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. 00:09:20.956 [2024-11-17 00:41:12.967194] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. 00:09:20.956 [2024-11-17 00:41:12.967222] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. 00:09:20.956 [2024-11-17 00:41:12.968078] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. 00:09:20.956 [2024-11-17 00:41:12.968145] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. 00:09:20.956 [2024-11-17 00:41:12.968169] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. 00:09:20.956 [2024-11-17 00:41:12.968201] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76071) is not found. Dropping the request. 00:09:21.218 00:41:13 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:21.218 00:41:13 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:21.218 00:41:13 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:21.218 00:41:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.218 00:41:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.218 00:41:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.218 ************************************ 00:09:21.218 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:21.218 ************************************ 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:21.218 * Looking for test storage... 
00:09:21.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:21.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.218 --rc genhtml_branch_coverage=1 00:09:21.218 --rc genhtml_function_coverage=1 00:09:21.218 --rc genhtml_legend=1 00:09:21.218 --rc geninfo_all_blocks=1 00:09:21.218 --rc geninfo_unexecuted_blocks=1 00:09:21.218 00:09:21.218 ' 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:21.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.218 --rc genhtml_branch_coverage=1 00:09:21.218 --rc genhtml_function_coverage=1 00:09:21.218 --rc genhtml_legend=1 00:09:21.218 --rc geninfo_all_blocks=1 00:09:21.218 --rc geninfo_unexecuted_blocks=1 00:09:21.218 00:09:21.218 ' 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:21.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.218 --rc genhtml_branch_coverage=1 00:09:21.218 --rc genhtml_function_coverage=1 00:09:21.218 --rc genhtml_legend=1 00:09:21.218 --rc geninfo_all_blocks=1 00:09:21.218 --rc geninfo_unexecuted_blocks=1 00:09:21.218 00:09:21.218 ' 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:21.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.218 --rc genhtml_branch_coverage=1 00:09:21.218 --rc genhtml_function_coverage=1 00:09:21.218 --rc genhtml_legend=1 00:09:21.218 --rc geninfo_all_blocks=1 00:09:21.218 --rc geninfo_unexecuted_blocks=1 00:09:21.218 00:09:21.218 ' 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:21.218 
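Those variables frame the experiment the trace below performs: inject a one-shot error on admin opcode 10 (0x0a, Get Features) that is held for up to 15 s and never submitted, fire exactly such a command at the controller so it gets stuck, then verify that a controller reset completes it within test_timeout. Condensed into the bare RPC sequence, with flags exactly as logged below and with $CMD and $tmp_file standing in for the long base64-encoded 64-byte admin command (opcode 0x0a, cdw10=7, i.e. Get Features / Number of Queues) and the mktemp output file seen in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &      # 4-core target, pid 76357 below
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD" > "$tmp_file" &   # held, never submitted
    sleep 2
    $RPC bdev_nvme_reset_controller nvme0       # the reset completes the stuck command manually
    $RPC bdev_nvme_detach_controller nvme0

The completion that comes back carries the injected status: jq extracts it from the response (.cpl = AAAAAAAAAAAAAAAAAAACAA== below), and it can be decoded by hand the same way the script does:

    printf %s 'AAAAAAAAAAAAAAAAAAACAA==' | base64 -d | hexdump -ve '/1 "0x%02x\n"' | tail -n 4
    # prints 0x00 0x00 0x02 0x00 -> bytes 12-15 of the 16-byte completion entry

Bytes 14-15 hold the phase bit plus status field, 0x0002 here, so SC = (0x0002 >> 1) & 0xff = 0x1 and SCT = (0x0002 >> 9) & 0x3 = 0x0 (the shift/mask pairs base64_decode_bits uses below), matching the injected --sct 0 --sc 1.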
00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.218 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:21.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76357 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76357 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 76357 ']' 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
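The bdf discovery traced just above is a small reusable idiom: gen_nvme.sh emits an SPDK bdev config as JSON for every NVMe device present, jq pulls out the PCI addresses, and the test simply takes the first one. Straight from the trace:

    bdfs=($("/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}    # 0000:00:10.0 here, out of the four controllers found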
00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.481 00:41:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.481 [2024-11-17 00:41:13.387199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:21.481 [2024-11-17 00:41:13.387629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76357 ] 00:09:21.743 [2024-11-17 00:41:13.561769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.743 [2024-11-17 00:41:13.639092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.743 [2024-11-17 00:41:13.639559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.743 [2024-11-17 00:41:13.639950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.743 [2024-11-17 00:41:13.640114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:22.315 nvme0n1 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_LLO4S.txt 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:22.315 true 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731804074 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76380 00:09:22.315 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:22.316 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:22.316 00:41:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h 
-c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:24.863 [2024-11-17 00:41:16.308970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:24.863 [2024-11-17 00:41:16.309196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:24.863 [2024-11-17 00:41:16.309214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:24.863 [2024-11-17 00:41:16.309227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:24.863 [2024-11-17 00:41:16.310735] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:24.863 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76380 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76380 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76380 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:24.863 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_LLO4S.txt 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- 
# printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_LLO4S.txt 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76357 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 76357 ']' 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 76357 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76357 00:09:24.864 killing process with pid 76357 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76357' 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 76357 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 76357 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:24.864 ************************************ 00:09:24.864 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:24.864 ************************************ 00:09:24.864 00:09:24.864 real 0m3.663s 00:09:24.864 user 0m12.612s 00:09:24.864 
sys 0m0.642s 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.864 00:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:24.864 00:41:16 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:24.864 00:41:16 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:24.864 00:41:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.864 00:41:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.864 00:41:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.864 ************************************ 00:09:24.864 START TEST nvme_fio 00:09:24.864 ************************************ 00:09:24.864 00:41:16 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:24.864 00:41:16 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:24.864 00:41:16 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:24.864 00:41:16 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:24.864 00:41:16 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:24.864 00:41:16 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:24.864 00:41:16 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:24.864 00:41:16 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:24.864 00:41:16 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:24.864 00:41:16 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:24.864 00:41:16 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:24.864 00:41:16 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:24.864 00:41:16 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:24.864 00:41:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:24.864 00:41:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:24.864 00:41:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:25.125 00:41:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:25.125 00:41:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:25.387 00:41:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:25.387 00:41:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:25.387 00:41:17 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:25.648 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:25.648 fio-3.35 00:09:25.648 Starting 1 thread 00:09:30.932 00:09:30.932 test: (groupid=0, jobs=1): err= 0: pid=76513: Sun Nov 17 00:41:22 2024 00:09:30.932 read: IOPS=20.1k, BW=78.6MiB/s (82.5MB/s)(157MiB/2001msec) 00:09:30.932 slat (nsec): min=3345, max=77803, avg=5305.99, stdev=2366.51 00:09:30.932 clat (usec): min=249, max=9112, avg=3163.35, stdev=972.60 00:09:30.932 lat (usec): min=253, max=9118, avg=3168.65, stdev=973.68 00:09:30.932 clat percentiles (usec): 00:09:30.932 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2540], 00:09:30.932 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2933], 00:09:30.932 | 70.00th=[ 3130], 80.00th=[ 3556], 90.00th=[ 4621], 95.00th=[ 5407], 00:09:30.932 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7767], 99.95th=[ 7963], 00:09:30.932 | 99.99th=[ 8717] 00:09:30.932 bw ( KiB/s): min=74192, max=87744, per=98.56%, avg=79381.33, stdev=7312.09, samples=3 00:09:30.932 iops : min=18548, max=21936, avg=19845.33, stdev=1828.02, samples=3 00:09:30.932 write: IOPS=20.1k, BW=78.5MiB/s (82.3MB/s)(157MiB/2001msec); 0 zone resets 00:09:30.932 slat (nsec): min=3479, max=74614, avg=5490.02, stdev=2434.03 00:09:30.932 clat (usec): min=203, max=9184, avg=3179.64, stdev=967.14 00:09:30.932 lat (usec): min=208, max=9201, avg=3185.13, stdev=968.19 00:09:30.932 clat percentiles (usec): 00:09:30.932 | 1.00th=[ 1975], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2573], 00:09:30.932 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2966], 00:09:30.932 | 70.00th=[ 3163], 80.00th=[ 3556], 90.00th=[ 4621], 95.00th=[ 5407], 00:09:30.932 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7701], 99.95th=[ 7832], 00:09:30.932 | 99.99th=[ 8717] 00:09:30.932 bw ( KiB/s): min=74248, max=87808, per=98.85%, avg=79410.67, stdev=7335.92, samples=3 00:09:30.932 iops : min=18562, max=21952, avg=19852.67, stdev=1833.98, samples=3 00:09:30.932 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:30.932 lat (msec) : 2=1.04%, 4=83.83%, 10=15.07% 00:09:30.932 cpu : usr=99.00%, sys=0.10%, ctx=3, majf=0, minf=627 00:09:30.932 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:30.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.932 issued rwts: total=40289,40189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.932 00:09:30.932 Run status group 0 (all jobs): 00:09:30.932 READ: bw=78.6MiB/s (82.5MB/s), 78.6MiB/s-78.6MiB/s (82.5MB/s-82.5MB/s), io=157MiB (165MB), run=2001-2001msec 00:09:30.933 WRITE: bw=78.5MiB/s (82.3MB/s), 78.5MiB/s-78.5MiB/s (82.3MB/s-82.3MB/s), io=157MiB (165MB), run=2001-2001msec 00:09:30.933 ----------------------------------------------------- 00:09:30.933 Suppressions used: 00:09:30.933 count bytes template 00:09:30.933 1 32 /usr/src/fio/parse.c 00:09:30.933 1 8 libtcmalloc_minimal.so 00:09:30.933 ----------------------------------------------------- 00:09:30.933 00:09:30.933 00:41:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:30.933 00:41:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:30.933 00:41:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:30.933 00:41:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:30.933 00:41:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:30.933 00:41:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:30.933 00:41:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:30.933 00:41:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:30.933 00:41:22 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:30.933 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:30.933 fio-3.35 00:09:30.933 Starting 1 thread 00:09:37.518 00:09:37.518 test: (groupid=0, jobs=1): err= 0: pid=76570: Sun Nov 17 00:41:29 2024 00:09:37.519 read: IOPS=20.2k, BW=79.1MiB/s (82.9MB/s)(158MiB/2001msec) 00:09:37.519 slat (usec): min=4, max=188, avg= 5.99, stdev= 2.70 00:09:37.519 clat (usec): min=440, max=10172, avg=3137.37, stdev=981.90 00:09:37.519 lat (usec): min=447, max=10215, avg=3143.36, stdev=983.28 00:09:37.519 clat percentiles (usec): 00:09:37.519 | 1.00th=[ 2278], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2606], 00:09:37.519 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2900], 00:09:37.519 | 70.00th=[ 3032], 80.00th=[ 3261], 90.00th=[ 4359], 95.00th=[ 5407], 00:09:37.519 | 99.00th=[ 7111], 99.50th=[ 7570], 99.90th=[ 8717], 99.95th=[ 9110], 00:09:37.519 | 99.99th=[10028] 00:09:37.519 bw ( KiB/s): min=74736, max=87088, per=100.00%, avg=82722.67, stdev=6926.65, samples=3 00:09:37.519 iops : min=18684, max=21772, avg=20680.67, stdev=1731.66, samples=3 00:09:37.519 write: IOPS=20.2k, BW=78.9MiB/s (82.7MB/s)(158MiB/2001msec); 0 zone resets 00:09:37.519 slat (nsec): min=4891, max=59128, avg=6346.93, stdev=2563.62 00:09:37.519 clat (usec): min=460, max=10091, avg=3171.49, stdev=1010.41 00:09:37.519 lat (usec): min=466, max=10105, avg=3177.84, stdev=1011.83 00:09:37.519 clat percentiles (usec): 00:09:37.519 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2606], 00:09:37.519 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2933], 00:09:37.519 | 70.00th=[ 3064], 80.00th=[ 3294], 90.00th=[ 4490], 95.00th=[ 5604], 00:09:37.519 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 8717], 99.95th=[ 9241], 00:09:37.519 | 99.99th=[ 9896] 00:09:37.519 bw ( KiB/s): min=75320, max=87040, per=100.00%, avg=82778.67, stdev=6481.27, samples=3 00:09:37.519 iops : min=18830, max=21760, avg=20694.67, stdev=1620.32, samples=3 00:09:37.519 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:37.519 lat (msec) : 2=0.15%, 4=87.49%, 10=12.33%, 20=0.01% 00:09:37.519 cpu : usr=99.20%, sys=0.05%, ctx=2, majf=0, minf=626 00:09:37.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:37.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.519 issued rwts: total=40504,40415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.519 00:09:37.519 Run status group 0 (all jobs): 00:09:37.519 READ: bw=79.1MiB/s (82.9MB/s), 79.1MiB/s-79.1MiB/s (82.9MB/s-82.9MB/s), io=158MiB (166MB), run=2001-2001msec 00:09:37.519 WRITE: bw=78.9MiB/s (82.7MB/s), 78.9MiB/s-78.9MiB/s (82.7MB/s-82.7MB/s), io=158MiB (166MB), run=2001-2001msec 00:09:37.780 ----------------------------------------------------- 00:09:37.780 Suppressions used: 00:09:37.780 count bytes template 00:09:37.780 1 32 /usr/src/fio/parse.c 00:09:37.780 1 8 libtcmalloc_minimal.so 00:09:37.780 ----------------------------------------------------- 00:09:37.780 00:09:37.780 00:41:29 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:09:37.780 00:41:29 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:37.780 00:41:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:37.780 00:41:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:38.042 00:41:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:38.042 00:41:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:38.042 00:41:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:38.042 00:41:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:38.042 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:38.304 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:38.304 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:38.304 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:38.304 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:38.304 00:41:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:38.304 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:38.304 fio-3.35 00:09:38.304 Starting 1 thread 00:09:44.899 00:09:44.899 test: (groupid=0, jobs=1): err= 0: pid=76636: Sun Nov 17 00:41:35 2024 00:09:44.899 read: IOPS=16.3k, BW=63.7MiB/s (66.8MB/s)(127MiB/2001msec) 00:09:44.899 slat (nsec): min=4886, max=81780, avg=7058.77, stdev=3323.63 00:09:44.899 clat (usec): min=209, max=12344, avg=3900.21, stdev=1285.55 00:09:44.899 lat (usec): min=214, max=12388, avg=3907.27, stdev=1287.02 00:09:44.899 clat percentiles (usec): 00:09:44.899 | 1.00th=[ 2409], 5.00th=[ 2606], 10.00th=[ 2737], 20.00th=[ 2933], 00:09:44.899 | 30.00th=[ 3130], 40.00th=[ 3294], 50.00th=[ 
3458], 60.00th=[ 3654], 00:09:44.899 | 70.00th=[ 4015], 80.00th=[ 4817], 90.00th=[ 5800], 95.00th=[ 6718], 00:09:44.899 | 99.00th=[ 7963], 99.50th=[ 8291], 99.90th=[ 9241], 99.95th=[11076], 00:09:44.899 | 99.99th=[12256] 00:09:44.899 bw ( KiB/s): min=55256, max=67080, per=93.40%, avg=60920.00, stdev=5927.58, samples=3 00:09:44.899 iops : min=13814, max=16770, avg=15230.00, stdev=1481.90, samples=3 00:09:44.899 write: IOPS=16.3k, BW=63.8MiB/s (66.9MB/s)(128MiB/2001msec); 0 zone resets 00:09:44.899 slat (nsec): min=5028, max=68547, avg=7508.73, stdev=3317.26 00:09:44.899 clat (usec): min=199, max=12265, avg=3913.51, stdev=1281.80 00:09:44.899 lat (usec): min=205, max=12280, avg=3921.02, stdev=1283.27 00:09:44.899 clat percentiles (usec): 00:09:44.899 | 1.00th=[ 2409], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2966], 00:09:44.899 | 30.00th=[ 3163], 40.00th=[ 3326], 50.00th=[ 3490], 60.00th=[ 3687], 00:09:44.899 | 70.00th=[ 4015], 80.00th=[ 4817], 90.00th=[ 5866], 95.00th=[ 6718], 00:09:44.899 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[ 9503], 99.95th=[11207], 00:09:44.899 | 99.99th=[12125] 00:09:44.899 bw ( KiB/s): min=55600, max=66392, per=92.59%, avg=60536.00, stdev=5454.50, samples=3 00:09:44.899 iops : min=13900, max=16598, avg=15134.00, stdev=1363.63, samples=3 00:09:44.899 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:44.899 lat (msec) : 2=0.18%, 4=69.50%, 10=30.19%, 20=0.07% 00:09:44.899 cpu : usr=98.55%, sys=0.35%, ctx=17, majf=0, minf=626 00:09:44.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:44.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.899 issued rwts: total=32629,32705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.899 00:09:44.899 Run status group 0 (all jobs): 00:09:44.899 READ: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=127MiB (134MB), run=2001-2001msec 00:09:44.899 WRITE: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=128MiB (134MB), run=2001-2001msec 00:09:44.899 ----------------------------------------------------- 00:09:44.899 Suppressions used: 00:09:44.899 count bytes template 00:09:44.899 1 32 /usr/src/fio/parse.c 00:09:44.899 1 8 libtcmalloc_minimal.so 00:09:44.899 ----------------------------------------------------- 00:09:44.899 00:09:44.899 00:41:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:44.899 00:41:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:44.899 00:41:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:44.899 00:41:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:44.899 00:41:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:44.899 00:41:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:44.899 00:41:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:44.899 00:41:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:44.899 00:41:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:44.899 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:44.899 fio-3.35 00:09:44.899 Starting 1 thread 00:09:50.191 00:09:50.191 test: (groupid=0, jobs=1): err= 0: pid=76693: Sun Nov 17 00:41:41 2024 00:09:50.191 read: IOPS=17.6k, BW=68.6MiB/s (72.0MB/s)(137MiB/2001msec) 00:09:50.191 slat (nsec): min=4843, max=82108, avg=6676.18, stdev=3155.65 00:09:50.191 clat (usec): min=203, max=13854, avg=3616.64, stdev=1264.52 00:09:50.191 lat (usec): min=209, max=13921, avg=3623.31, stdev=1266.07 00:09:50.191 clat percentiles (usec): 00:09:50.191 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2671], 00:09:50.191 | 30.00th=[ 2802], 40.00th=[ 2966], 50.00th=[ 3163], 60.00th=[ 3392], 00:09:50.191 | 70.00th=[ 3720], 80.00th=[ 4424], 90.00th=[ 5538], 95.00th=[ 6456], 00:09:50.191 | 99.00th=[ 7635], 99.50th=[ 8029], 99.90th=[ 9110], 99.95th=[11338], 00:09:50.191 | 99.99th=[13566] 00:09:50.191 bw ( KiB/s): min=59936, max=83320, per=100.00%, avg=72664.00, stdev=11828.89, samples=3 00:09:50.191 iops : min=14984, max=20830, avg=18166.00, stdev=2957.22, samples=3 00:09:50.191 write: IOPS=17.6k, BW=68.7MiB/s (72.0MB/s)(137MiB/2001msec); 0 zone resets 00:09:50.191 slat (usec): min=4, max=567, avg= 7.07, stdev= 4.41 00:09:50.191 clat (usec): min=223, max=13646, avg=3637.82, stdev=1271.79 00:09:50.191 lat (usec): min=231, max=13662, avg=3644.89, stdev=1273.40 00:09:50.191 clat percentiles (usec): 00:09:50.191 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2704], 00:09:50.191 | 30.00th=[ 2835], 40.00th=[ 2999], 50.00th=[ 3195], 60.00th=[ 3392], 00:09:50.191 | 70.00th=[ 3752], 80.00th=[ 4424], 90.00th=[ 5604], 95.00th=[ 6456], 
00:09:50.191 | 99.00th=[ 7767], 99.50th=[ 8094], 99.90th=[ 9372], 99.95th=[11600], 00:09:50.191 | 99.99th=[13304] 00:09:50.191 bw ( KiB/s): min=60328, max=83184, per=100.00%, avg=72592.00, stdev=11519.37, samples=3 00:09:50.191 iops : min=15082, max=20796, avg=18148.00, stdev=2879.84, samples=3 00:09:50.191 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:50.191 lat (msec) : 2=0.11%, 4=74.19%, 10=25.59%, 20=0.08% 00:09:50.191 cpu : usr=98.80%, sys=0.20%, ctx=4, majf=0, minf=624 00:09:50.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:50.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.191 issued rwts: total=35161,35187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.191 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.191 00:09:50.191 Run status group 0 (all jobs): 00:09:50.191 READ: bw=68.6MiB/s (72.0MB/s), 68.6MiB/s-68.6MiB/s (72.0MB/s-72.0MB/s), io=137MiB (144MB), run=2001-2001msec 00:09:50.191 WRITE: bw=68.7MiB/s (72.0MB/s), 68.7MiB/s-68.7MiB/s (72.0MB/s-72.0MB/s), io=137MiB (144MB), run=2001-2001msec 00:09:50.191 ----------------------------------------------------- 00:09:50.191 Suppressions used: 00:09:50.191 count bytes template 00:09:50.191 1 32 /usr/src/fio/parse.c 00:09:50.191 1 8 libtcmalloc_minimal.so 00:09:50.191 ----------------------------------------------------- 00:09:50.191 00:09:50.191 00:41:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:50.191 00:41:41 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:50.191 00:09:50.191 real 0m24.690s 00:09:50.191 user 0m15.887s 00:09:50.191 sys 0m15.278s 00:09:50.191 00:41:41 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.191 00:41:41 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:50.191 ************************************ 00:09:50.191 END TEST nvme_fio 00:09:50.191 ************************************ 00:09:50.191 00:09:50.191 real 1m33.059s 00:09:50.191 user 3m31.898s 00:09:50.191 sys 0m26.090s 00:09:50.191 00:41:41 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.191 00:41:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.191 ************************************ 00:09:50.191 END TEST nvme 00:09:50.191 ************************************ 00:09:50.191 00:41:41 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:50.191 00:41:41 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:50.191 00:41:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:50.191 00:41:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.191 00:41:41 -- common/autotest_common.sh@10 -- # set +x 00:09:50.191 ************************************ 00:09:50.192 START TEST nvme_scc 00:09:50.192 ************************************ 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:50.192 * Looking for test storage... 
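A minimal sketch of the preload dance the fio_plugin helper traced above performs: when the SPDK fio ioengine was built with a sanitizer, ldd is used to locate the sanitizer runtime (libasan or libclang_rt.asan), and that runtime has to sit in front of the plugin in LD_PRELOAD, otherwise the plugin can fail to load (ASan insists on being first in the initial library list). Paths are the ones from this run; the loop is a simplified rendering of the helper in common/autotest_common.sh, not a verbatim copy.

    #!/usr/bin/env bash
    # Simplified sketch of the fio_plugin helper traced above (paths from this run).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; column 3 is the path.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # The sanitizer runtime (when found) must precede the plugin in LD_PRELOAD.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096

The dotted traddr=0000.00.13.0 is deliberate: fio treats ':' in --filename as a separator, so the SPDK plugin accepts '.' in the PCI address and maps it back to 0000:00:13.0.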
00:09:50.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:50.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.192 --rc genhtml_branch_coverage=1 00:09:50.192 --rc genhtml_function_coverage=1 00:09:50.192 --rc genhtml_legend=1 00:09:50.192 --rc geninfo_all_blocks=1 00:09:50.192 --rc geninfo_unexecuted_blocks=1 00:09:50.192 00:09:50.192 ' 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:50.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.192 --rc genhtml_branch_coverage=1 00:09:50.192 --rc genhtml_function_coverage=1 00:09:50.192 --rc genhtml_legend=1 00:09:50.192 --rc geninfo_all_blocks=1 00:09:50.192 --rc geninfo_unexecuted_blocks=1 00:09:50.192 00:09:50.192 ' 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:50.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.192 --rc genhtml_branch_coverage=1 00:09:50.192 --rc genhtml_function_coverage=1 00:09:50.192 --rc genhtml_legend=1 00:09:50.192 --rc geninfo_all_blocks=1 00:09:50.192 --rc geninfo_unexecuted_blocks=1 00:09:50.192 00:09:50.192 ' 00:09:50.192 00:41:41 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:50.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.192 --rc genhtml_branch_coverage=1 00:09:50.192 --rc genhtml_function_coverage=1 00:09:50.192 --rc genhtml_legend=1 00:09:50.192 --rc geninfo_all_blocks=1 00:09:50.192 --rc geninfo_unexecuted_blocks=1 00:09:50.192 00:09:50.192 ' 00:09:50.192 00:41:41 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.192 00:41:41 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.192 00:41:41 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.192 00:41:41 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.192 00:41:41 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.192 00:41:41 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:50.192 00:41:41 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
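The lcov gate traced above ('lt 1.15 2' via cmp_versions) boils down to: split both version strings on '.', '-' and ':', then compare the pieces numerically, left to right. A condensed sketch of that pattern, assuming purely numeric fields (the real scripts/common.sh additionally validates each field through its decimal helper):

    #!/usr/bin/env bash
    # Field-wise numeric version comparison, after scripts/common.sh's cmp_versions.
    lt() { # succeeds when version $1 sorts before version $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # A missing field counts as 0, so "2" is treated like "2.0".
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1 # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"

That comparison is what decides whether the legacy --rc lcov_branch_coverage/lcov_function_coverage options get exported in the LCOV_OPTS above.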
00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:50.192 00:41:41 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:50.192 00:41:41 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.192 00:41:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:50.192 00:41:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:50.192 00:41:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:50.192 00:41:41 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:50.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.192 Waiting for block devices as requested 00:09:50.449 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.449 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.449 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.449 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.737 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:55.737 00:41:47 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:55.737 00:41:47 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:55.737 00:41:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.737 00:41:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:55.737 00:41:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:55.737 00:41:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:55.737 00:41:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.738 00:41:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:55.738 00:41:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.738 00:41:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
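scan_nvme_ctrls, which starts above, builds one bash associative array per controller: nvme_get runs nvme id-ctrl, splits every "field : value" line at the first colon, and evals the pair into a globally scoped array (hence the local -gA 'nvme0=()' in the trace). A stripped-down sketch of that pattern; the real nvme_get in test/common/nvme/functions.sh keeps value padding intact and handles more edge cases:

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl` output into an associative array, as traced above.
    nvme_get() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                  # e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # field name, e.g. "vid"
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\"${val# }\"" # nvme0[vid]="0x1b36"
        done < <(nvme id-ctrl "$dev")
    }
    nvme_get nvme0 /dev/nvme0
    echo "vendor: ${nvme0[vid]}, serial: ${nvme0[sn]}"

One wrinkle worth noting: because the array is declared with -g inside the function, nvme0[...] stays visible to the caller, which is what lets later test steps index into it.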
00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:55.738 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
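Many of the fields captured in this dump are packed encodings rather than plain counts. mdts=7 above means the controller caps transfers at 2^7 minimum-size pages (512 KiB with the usual 4 KiB CAP.MPSMIN), and oacs=0x12a is a bitmap of optional admin commands. A quick decode, assuming the bit positions from the NVMe base specification:

    #!/usr/bin/env bash
    # Decode two of the packed id-ctrl fields captured above.
    mdts=7 oacs=0x12a
    echo "max data transfer: $(( (1 << mdts) * 4 )) KiB"   # 2^MDTS pages of 4 KiB
    (( oacs & (1 << 1) )) && echo "Format NVM supported"
    (( oacs & (1 << 3) )) && echo "Namespace Management supported"
    (( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported" # typical for QEMU

All three bits are set in 0x12a, which fits the QEMU NVMe Ctrl this dump comes from.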
00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:55.739 00:41:47 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.739 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:55.740 00:41:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:55.740 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
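The namespace geometry above is easy to sanity-check: nsze, ncap and nuse all read 0x140000 blocks, and flbas=0x4 selects LBA format index 4. Assuming that format carries lbads:12, i.e. a 4 KiB data size, which is consistent with the bs=4096 the test picked earlier after the 'Extended Data LBA' probe, the namespace comes out at exactly 5 GiB:

    #!/usr/bin/env bash
    # Sanity-check the namespace size captured above (lbads=12 assumed for lbaf4).
    nsze=0x140000 lbads=12
    printf '%d blocks x %d B per block = %d GiB\n' \
        "$((nsze))" "$((1 << lbads))" "$(( (nsze * (1 << lbads)) >> 30 ))"

The nsze=ncap=nuse equality also says the namespace is fully allocated and fully in use, which is what you would expect from a QEMU-backed test device.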
00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:55.741 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
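mssrl/mcl/msrc just above are the Copy command limits, and they line up with bit 8 of oncs=0x15d earlier in this dump. A small cross-check, assuming the ONCS bit assignment from the NVMe base specification and that msrc is a 0's-based count:

    #!/usr/bin/env bash
    # Cross-check the Copy command limits captured above.
    oncs=0x15d mssrl=128 mcl=128 msrc=127
    (( oncs & (1 << 8) )) && echo "Copy command supported"
    echo "up to $(( msrc + 1 )) source ranges, $mssrl blocks per range, $mcl blocks per copy"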
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.742 00:41:47 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:55.742 00:41:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.742 00:41:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:55.742 00:41:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.742 00:41:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.742 00:41:47 nvme_scc -- 
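The dump above is produced by nvme_get in nvme/functions.sh: for each device it shells out to nvme-cli, splits every "field : value" line of the identify output on the colon, and evals the pair into a global associative array named after the device (nvme0, nvme0n1, and so on). A minimal sketch of that pattern follows, assuming nvme-cli's plain-text id-ctrl output; parse_nvme_id is a made-up name for illustration, and the real script invokes the binary at /usr/local/src/nvme-cli/nvme rather than relying on PATH.

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern visible in the trace: run an identify
    # subcommand and collect its "field : value" lines into a global
    # associative array keyed by register name. parse_nvme_id is hypothetical.
    parse_nvme_id() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        declare -gA "$ref=()"                    # e.g. nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip banner/blank lines
            reg=${reg//[[:space:]]/}             # strip padding from the key
            eval "${ref}[${reg}]=\"${val# }\""   # nvme1[vid]="0x1b36"
        done < <(nvme "$subcmd" "$dev")
    }

    parse_nvme_id nvme1 id-ctrl /dev/nvme1       # then: echo "${nvme1[sn]}"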
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:55.742 00:41:47 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:55.742 00:41:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:55.742 00:41:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:55.742 00:41:47 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:55.742 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:09:55.743 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:09:55.744 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
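Once a controller's identify data is parsed, the script walks that controller's namespaces (the @53 to @58 lines below): a sysfs glob finds the nvme1n* nodes, the ${ns##*n} expansion peels the namespace index off the device name, and a nameref lets the same loop fill whichever nvme<N>_ns map belongs to the current controller. A sketch under those assumptions; on a machine without the device the glob simply matches nothing and the loop is a no-op.

    #!/usr/bin/env bash
    # Namespace walk as traced below: glob /sys/class/nvme/nvme1/nvme1n* and
    # index each hit by the digits after the final 'n' in its name.
    declare -A nvme1_ns=()
    walk_namespaces() {
        local ctrl=$1 ns
        local -n _ctrl_ns=${ctrl##*/}_ns         # nameref -> nvme1_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do      # nvme1n1, nvme1n2, ...
            [[ -e $ns ]] || continue             # unmatched glob -> skip
            _ctrl_ns[${ns##*n}]=${ns##*/}        # _ctrl_ns[1]=nvme1n1
        done
    }
    walk_namespaces /sys/class/nvme/nvme1
    declare -p nvme1_ns                          # dump what was found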
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:55.745 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:55.746 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
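The lbafN strings above record each supported LBA format: ms is the metadata bytes per block, lbads is the base-2 exponent of the data size (lbads:9 is 512-byte, lbads:12 is 4096-byte sectors), and "(in use)" flags the format selected by flbas. For nvme1n1 that is lbaf7, i.e. 4096-byte blocks with 64 bytes of metadata. A sketch of the decode, reusing the values just traced; the low nibble of flbas selecting the format index follows the NVMe identify-namespace layout.

    #!/usr/bin/env bash
    # Decode the in-use LBA format from the fields traced above.
    declare -A nvme1n1=(
        [flbas]=0x7
        [lbaf7]='ms:64 lbads:12 rp:0 (in use)'
    )
    fmt=$(( ${nvme1n1[flbas]} & 0xf ))           # low nibble selects the LBAF
    lbaf=${nvme1n1[lbaf$fmt]}
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}   # -> 12
    echo "nvme1n1 uses lbaf$fmt: $(( 1 << lbads )) bytes per block"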
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:55.747 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:55.748 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:09:55.749 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
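That completes the Identify Controller pass: every field printed by nvme id-ctrl is now addressable as nvme2[<field>]. The pattern the trace keeps repeating (IFS=:, read -r reg val, then an eval into a global -A array) amounts to the following simplified stand-in, assuming nvme-cli's plain-text "field : value" output; this is a sketch, not the exact body of nvme/functions.sh:

#!/usr/bin/env bash
# Parse `nvme id-ctrl` output into an associative array keyed by field
# name, as the traced nvme_get does for nvme2.
declare -A ctrl
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue            # skip the banner and blank lines
    reg=${reg//[[:space:]]/}             # field names carry no spaces
    val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace only
    ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme2)
echo "vid=${ctrl[vid]} sn=${ctrl[sn]} oncs=${ctrl[oncs]}"

A power-state line such as "ps 0 : mp:25.00W ..." splits only at the first colon, which is why the trace stores nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' with the inner colons intact. For nvme_scc the decisive field is oncs=0x15d: bit 8 of ONCS advertises the Copy command, and 0x15d & 0x100 is non-zero, so this QEMU controller is eligible for the simple-copy tests.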
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:55.750 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:55.751 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
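Namespace nvme2n1 is now captured: flbas=0x4 selects LBA format 4, and lbaf4 carries lbads:12, so the namespace runs on 4096-byte data blocks with no metadata; at nsze=0x100000 blocks that is a 4 GiB namespace. A small sketch of that decode, using the values captured above (variable names are illustrative, not from functions.sh):

#!/usr/bin/env bash
# Derive the active block size from flbas plus the selected lbaf entry.
flbas=0x4
lbaf4='ms:0 lbads:12 rp:0 (in use)'   # nvme2n1[lbaf4] from the trace
fmt=$(( flbas & 0xf ))                # low nibble selects the LBA format
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$lbaf4")
echo "lbaf${fmt}: $(( 1 << lbads ))-byte blocks"   # -> lbaf4: 4096-byte blocks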
nvme/functions.sh@18 -- # shift 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:55.752 00:41:47 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:55.752 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
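The trace above is nvme/functions.sh's nvme_get filling the nvme2n2 associative array: each register of the `nvme id-ns` output goes through one `IFS=: read -r reg val` iteration, a `[[ -n $val ]]` guard (functions.sh@22), and an eval-assignment (functions.sh@23). A minimal stand-alone rendition of that pattern follows; it is a sketch with hypothetical names (ns_info), simplified relative to the real helper:

#!/usr/bin/env bash
# Sketch of the parse loop traced above. Assumes `nvme id-ns` prints one
# "reg : value" pair per line; the lbaf descriptors keep their embedded
# colons because $val receives everything after the first colon.
declare -A ns_info
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue             # functions.sh@22: skip valueless lines
    reg=${reg//[[:space:]]/}              # "lbaf  4" -> "lbaf4"
    val=${val#"${val%%[![:space:]]*}"}    # trim leading spaces, keep the rest
    ns_info[$reg]=$val                    # the eval 'nvme2n2[...]="..."' step
done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2)
echo "mssrl=${ns_info[mssrl]} mcl=${ns_info[mcl]} msrc=${ns_info[msrc]}"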
00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.753 
00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:55.753 00:41:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 
00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:55.754 00:41:47 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.754 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.755 
00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:55.755 00:41:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.755 00:41:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:55.755 00:41:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.755 00:41:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.755 00:41:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:56.017 00:41:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:56.017 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
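This id-ctrl pass opened just above (functions.sh@47-52) with the script globbing /sys/class/nvme/nvme*, resolving this controller's PCI address (0000:00:13.0), and gating it through scripts/common.sh's pci_can_use before shelling out to `nvme id-ctrl`. A rough stand-alone equivalent of that enumeration, with pci_can_use reduced to a hypothetical blocklist check:

#!/usr/bin/env bash
# Sketch of the controller walk traced at functions.sh@47-52. The sysfs
# layout assumed here is the standard one for PCIe NVMe, where
# /sys/class/nvme/nvmeX/device resolves to the PCI function directory.
shopt -s nullglob
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                # functions.sh@48
    ctrl_dev=${ctrl##*/}                      # e.g. nvme3
    pci=$(readlink -f "$ctrl/device")
    pci=${pci##*/}                            # e.g. 0000:00:13.0
    # hypothetical stand-in for scripts/common.sh's pci_can_use
    [[ " ${PCI_BLOCKED:-} " == *" $pci "* ]] && continue
    echo "probing $ctrl_dev at $pci"
    # the real script now runs: nvme id-ctrl /dev/$ctrl_dev
done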
00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.018 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 
00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
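Note how this controller's array came into being: nvme_get received the name "nvme3" as an argument and ran `local -gA 'nvme3=()'` (functions.sh@20, at the top of this controller's trace), declaring a global associative array under a name that only exists at runtime. The same trick in isolation, as a sketch (nvme_get_sketch is a made-up name; the two values are the ones parsed for nvme3 in this trace):

#!/usr/bin/env bash
# Declaring and filling a global associative array whose name is computed
# at runtime, as nvme/functions.sh does for nvme0..nvme3.
nvme_get_sketch() {
    local ref=$1
    local -gA "$ref=()"      # global assoc array named e.g. "nvme3"
    local -n arr=$ref        # nameref so the function can write to it
    arr[vid]=0x1b36
    arr[oncs]=0x15d
}
nvme_get_sketch nvme3
declare -p nvme3             # declare -A nvme3=([oncs]="0x15d" [vid]="0x1b36" )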
00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.019 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.020 00:41:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:56.020 00:41:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:56.020 00:41:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:56.020 
00:41:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:56.021 00:41:47 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:56.021 00:41:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:56.021 00:41:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:56.021 00:41:47 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:56.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:56.854 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.854 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.854 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.115 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:57.115 00:41:49 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:57.115 00:41:49 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:57.115 00:41:49 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.115 00:41:49 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:57.115 ************************************ 00:09:57.115 START TEST nvme_simple_copy 00:09:57.115 ************************************ 00:09:57.115 00:41:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:57.374 Initializing NVMe Controllers 00:09:57.374 Attaching to 0000:00:10.0 00:09:57.374 Controller supports SCC. Attached to 0000:00:10.0 00:09:57.374 Namespace ID: 1 size: 6GB 00:09:57.374 Initialization complete. 00:09:57.374 00:09:57.374 Controller QEMU NVMe Ctrl (12340 ) 00:09:57.374 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:57.374 Namespace Block Size:4096 00:09:57.374 Writing LBAs 0 to 63 with Random Data 00:09:57.374 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:57.374 LBAs matching Written Data: 64 00:09:57.374 00:09:57.374 real 0m0.272s 00:09:57.374 user 0m0.095s 00:09:57.374 sys 0m0.075s 00:09:57.374 00:41:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.374 ************************************ 00:09:57.374 END TEST nvme_simple_copy 00:09:57.374 00:41:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:57.374 ************************************ 00:09:57.374 00:09:57.374 real 0m7.768s 00:09:57.374 user 0m1.014s 00:09:57.374 sys 0m1.466s 00:09:57.374 00:41:49 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.374 00:41:49 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:57.374 ************************************ 00:09:57.374 END TEST nvme_scc 00:09:57.374 ************************************ 00:09:57.374 00:41:49 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:57.374 00:41:49 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:57.374 00:41:49 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:57.374 00:41:49 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:57.374 00:41:49 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:57.374 00:41:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:57.634 00:41:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.634 00:41:49 -- common/autotest_common.sh@10 -- # set +x 00:09:57.634 ************************************ 00:09:57.634 START TEST nvme_fdp 00:09:57.634 ************************************ 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:57.634 * Looking for test storage... 
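The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` triple above come from the run_test wrapper in common/autotest_common.sh. A hedged sketch of the observable behaviour only (banner width and internals approximated, not the upstream source):

    run_test() {
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # the test binary plus its arguments
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

Invoked above as: run_test nvme_simple_copy .../test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'.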
00:09:57.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:57.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.634 --rc genhtml_branch_coverage=1 00:09:57.634 --rc genhtml_function_coverage=1 00:09:57.634 --rc genhtml_legend=1 00:09:57.634 --rc geninfo_all_blocks=1 00:09:57.634 --rc geninfo_unexecuted_blocks=1 00:09:57.634 00:09:57.634 ' 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:57.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.634 --rc genhtml_branch_coverage=1 00:09:57.634 --rc genhtml_function_coverage=1 00:09:57.634 --rc genhtml_legend=1 00:09:57.634 --rc geninfo_all_blocks=1 00:09:57.634 --rc geninfo_unexecuted_blocks=1 00:09:57.634 00:09:57.634 ' 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:57.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.634 --rc genhtml_branch_coverage=1 00:09:57.634 --rc genhtml_function_coverage=1 00:09:57.634 --rc genhtml_legend=1 00:09:57.634 --rc geninfo_all_blocks=1 00:09:57.634 --rc geninfo_unexecuted_blocks=1 00:09:57.634 00:09:57.634 ' 00:09:57.634 00:41:49 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:57.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.634 --rc genhtml_branch_coverage=1 00:09:57.634 --rc genhtml_function_coverage=1 00:09:57.634 --rc genhtml_legend=1 00:09:57.634 --rc geninfo_all_blocks=1 00:09:57.634 --rc geninfo_unexecuted_blocks=1 00:09:57.634 00:09:57.634 ' 00:09:57.634 00:41:49 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.634 00:41:49 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.634 00:41:49 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.634 00:41:49 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.634 00:41:49 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.634 00:41:49 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:57.634 00:41:49 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
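Note the echoed PATH: each re-source of paths/export.sh prepends the protoc/go/golangci directories again, so the same prefix appears four times. A guard like the following (an illustration only, not the upstream script) would keep the prepend idempotent:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH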
00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:57.634 00:41:49 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:57.634 00:41:49 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.634 00:41:49 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:57.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.157 Waiting for block devices as requested 00:09:58.157 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.418 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.418 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.418 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.723 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:03.723 00:41:55 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:03.723 00:41:55 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:03.723 00:41:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:03.723 00:41:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.723 00:41:55 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
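scan_nvme_ctrls (functions.sh@45 onward, traced above) walks /sys/class/nvme, checks each controller's PCI address, and dumps id-ctrl into the ctrls/nvmes/bdfs arrays declared just before. A rough skeleton under those assumptions; the PCI lookup via readlink is illustrative, where the real helper goes through pci_can_use:

    scan_nvme_ctrls() {
        local ctrl ctrl_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            ctrl_dev=${ctrl##*/}                             # e.g. nvme0
            pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:11.0
            nvme_get "$ctrl_dev" "/dev/$ctrl_dev"            # id-ctrl dump as traced
            ctrls[$ctrl_dev]=$ctrl_dev
            bdfs[$ctrl_dev]=$pci
        done
    }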
00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:03.723 00:41:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:03.723 00:41:55 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.723 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
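Values like oncs=0x15d captured in this dump are what the earlier ctrl_has_scc gate (functions.sh@188 above) tests: ONCS bit 8 advertises the Simple Copy Command. The check reduces to a one-line arithmetic test:

    # 0x15d & (1 << 8) == 0x100, so SCC is advertised for these controllers.
    ctrl_has_scc() {
        local -n _ctrl=$1             # nameref to e.g. nvme0, filled above
        local oncs=${_ctrl[oncs]:-0}
        (( oncs & 1 << 8 ))
    }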
00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:03.724 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.724 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:03.725 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:03.725 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.725 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.726 
00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:03.726 00:41:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.726 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:03.727 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:03.727 00:41:55 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:03.727 00:41:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:03.727 00:41:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.727 00:41:55 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.727 00:41:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # 
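
Once a controller's id-ctrl and id-ns data has been captured, functions.sh@58-63 register it in a set of global maps: _ctrl_ns keys each namespace by number, ctrls/nvmes record the array names, bdfs remembers the PCI address (0000:00:11.0 for nvme0 here), and ordered_ctrls keeps the controllers sorted by index. The scan then moves on to nvme1, where scripts/common.sh@21-27 act as a PCI filter: with the block pattern and the allowlist both empty, pci_can_use returns 0 and the device is kept. A sketch of that outer loop, reconstructed from the trace and reusing nvme_get from the sketch above; the function name scan_nvmes, the BDF derivation at @49, and the PCI_BLOCKED/PCI_ALLOWED variable names are assumptions (the trace only shows their empty expansions):

    declare -gA ctrls nvmes bdfs            # registries filled at functions.sh@60-62
    declare -ga ordered_ctrls               # @63: nvme0 at index 0, nvme1 at index 1, ...

    pci_can_use() {                         # shape implied by scripts/common.sh@18-27
        local i                             # common.sh@18
        [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1   # @21: empty here, so nothing is blocked
        [[ -z ${PCI_ALLOWED:-} ]] && return 0      # @25-27: empty allowlist accepts everything
        for i in ${PCI_ALLOWED}; do                # explicit allowlist match (assumed)
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }

    scan_nvmes() {                          # hypothetical name for the functions.sh@47-63 loop
        local ctrl ns ctrl_dev ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do                # @47
            [[ -e $ctrl ]] || continue                       # @48
            pci=$(basename "$(readlink -f "$ctrl/device")")  # @49: derivation assumed
            pci_can_use "$pci" || continue                   # @50
            ctrl_dev=${ctrl##*/}                             # @51: nvme0, nvme1, ...
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52
            declare -gA "${ctrl_dev}_ns=()"                  # backing array (creation point assumed)
            local -n _ctrl_ns=${ctrl_dev}_ns                 # @53
            for ns in "$ctrl/${ctrl##*/}n"*; do              # @54
                [[ -e $ns ]] || continue                     # @55
                ns_dev=${ns##*/}                             # @56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57
                _ctrl_ns[${ns##*n}]=$ns_dev                  # @58: keyed by namespace number
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
            bdfs["$ctrl_dev"]=$pci                           # @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
        done
    }
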
IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 
00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.728 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- 
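
For reference, wctemp and cctemp in the Identify Controller data are expressed in kelvins, so the 343/373 captured from this QEMU controller correspond to roughly 70 °C (warning) and 100 °C (critical):

    echo "$(( 343 - 273 ))C warning, $(( 373 - 273 ))C critical"   # ~70 °C / ~100 °C
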
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 
00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:03.729 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:03.730 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.730 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:03.731 00:41:55 nvme_fdp -- 
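
The id-ns fields being captured here describe the namespace geometry: nsze/ncap/nuse are counts of logical blocks, the low bits of flbas select which LBA format (lbaf0-7) is active, and each lbafN entry gives the metadata size (ms) and the log2 of the data size (lbads). nvme1n1 reports flbas=0x7, so format 7 of its own table is in use; nvme0n1 above reported flbas=0x4 with lbaf4 marked "(in use)". A worked example with the nvme0n1 numbers from the trace:

    flbas=0x4                     # formatted LBA size field (nvme0n1)
    fmt=$(( flbas & 0xf ))        # low bits pick the active format -> 4
    lbads=12                      # lbaf4 above: "ms:0 lbads:12 rp:0 (in use)"
    block=$(( 1 << lbads ))       # 2^12 = 4096-byte logical blocks
    echo $(( 0x140000 * block ))  # nsze -> 5368709120 bytes, i.e. a 5 GiB namespace
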
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.731 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:03.732 00:41:55 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:03.732 00:41:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:03.732 00:41:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.732 00:41:55 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:03.732 
00:41:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.732 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:03.733 00:41:55 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.733 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.734 00:41:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.734 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:03.735 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.735 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.736 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 00:41:55 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:03.737 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.737 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:03.738 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
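
The block above repeats one pattern per namespace (nvme/functions.sh@16-23): nvme_get runs nvme-cli's id-ns, splits each "field : value" output line on the colon, and evals the pair into a global associative array named after the device (nvme2n1, nvme2n2, nvme2n3). A minimal sketch of that loop, reconstructed from the trace rather than copied from the script -- the helper name, the binary path, the `local -gA` declaration, and the skip-on-empty-value check are all visible above; the exact body is an approximation:

    #!/usr/bin/env bash
    NVME_BIN=/usr/local/src/nvme-cli/nvme   # path as it appears in the trace

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. declare -gA nvme2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # header/blank lines carry no value
            reg=${reg//[[:blank:]]/}           # "lbaf  4 " -> "lbaf4"
            eval "${ref}[${reg}]=\"${val# }\"" # nvme2n3[nsze]="0x100000"
        done < <("$NVME_BIN" "$@")
    }

    # Usage mirroring the trace: nvme_get nvme2n3 id-ns /dev/nvme2n3
    # afterwards "${nvme2n3[nsze]}" holds 0x100000 and "${nvme2n3[lbaf4]}"
    # holds 'ms:0 lbads:12 rp:0 (in use)'.

In these dumps flbas=0x4 selects lbaf4, whose lbads:12 means 2^12 = 4096-byte logical blocks with no separate metadata.
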
00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:03.739 
00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:03.739 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.740 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:03.740 00:41:55 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:03.740 00:41:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:03.740 00:41:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.740 00:41:55 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:03.740 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.740 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 
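
One field in the nvme3 id-ctrl dump above matters specifically for this nvme_fdp run: ctratt=0x88010. Assuming the NVMe 2.0 CTRATT bit layout (my reading of the spec, not something this log states), bit 4 (0x10) is Endurance Groups and bit 19 (0x80000) is Flexible Data Placement, so this controller advertises FDP. A quick check in the same shell style:

    ctratt=0x88010                      # value captured for nvme3 above
    (( ctratt & (1 << 19) )) && echo "FDP supported"               # CTRATT bit 19 (TP4146)
    (( ctratt & (1 << 4) ))  && echo "Endurance Groups supported"  # CTRATT bit 4
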
00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.741 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 
00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:03.742 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:03.743 00:41:55 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:03.743 00:41:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:04.005 00:41:55 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:04.005 00:41:55 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:04.005 00:41:55 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:04.005 00:41:55 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:04.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.214 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.214 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.214 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.214 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.214 00:41:57 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:05.214 00:41:57 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:05.214 00:41:57 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:05.214 00:41:57 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:05.214 ************************************
00:10:05.214 START TEST nvme_flexible_data_placement
00:10:05.214 ************************************
00:10:05.214 00:41:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:05.214 Initializing NVMe Controllers
00:10:05.214 Attaching to 0000:00:13.0
00:10:05.214 Controller supports FDP Attached to 0000:00:13.0
00:10:05.214 Namespace ID: 1 Endurance Group ID: 1
00:10:05.214 Initialization complete.
00:10:05.214
00:10:05.214 ==================================
00:10:05.214 == FDP tests for Namespace: #01 ==
00:10:05.214 ==================================
00:10:05.214
00:10:05.214 Get Feature: FDP:
00:10:05.214 =================
00:10:05.214 Enabled: Yes
00:10:05.214 FDP configuration Index: 0
00:10:05.214
00:10:05.214 FDP configurations log page
00:10:05.214 ===========================
00:10:05.214 Number of FDP configurations: 1
00:10:05.214 Version: 0
00:10:05.214 Size: 112
00:10:05.214 FDP Configuration Descriptor: 0
00:10:05.214 Descriptor Size: 96
00:10:05.214 Reclaim Group Identifier format: 2
00:10:05.214 FDP Volatile Write Cache: Not Present
00:10:05.214 FDP Configuration: Valid
00:10:05.214 Vendor Specific Size: 0
00:10:05.214 Number of Reclaim Groups: 2
00:10:05.214 Number of Reclaim Unit Handles: 8
00:10:05.214 Max Placement Identifiers: 128
00:10:05.214 Number of Namespaces Supported: 256
00:10:05.214 Reclaim unit Nominal Size: 6000000 bytes
00:10:05.214 Estimated Reclaim Unit Time Limit: Not Reported
00:10:05.214 RUH Desc #000: RUH Type: Initially Isolated
00:10:05.214 RUH Desc #001: RUH Type: Initially Isolated
00:10:05.214 RUH Desc #002: RUH Type: Initially Isolated
00:10:05.214 RUH Desc #003: RUH Type: Initially Isolated
00:10:05.214 RUH Desc #004: RUH Type: Initially Isolated
00:10:05.214 RUH Desc #005: RUH Type: Initially Isolated
00:10:05.214 RUH Desc #006: RUH Type: Initially Isolated
00:10:05.214 RUH Desc #007: RUH Type: Initially Isolated
00:10:05.214
00:10:05.214 FDP reclaim unit handle usage log page
00:10:05.214 ======================================
00:10:05.214 Number of Reclaim Unit Handles: 8
00:10:05.214 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:10:05.214 RUH Usage Desc #001: RUH Attributes: Unused
00:10:05.214 RUH Usage Desc #002: RUH Attributes: Unused
00:10:05.214 RUH Usage Desc #003: RUH Attributes: Unused
00:10:05.214 RUH Usage Desc #004: RUH Attributes: Unused
00:10:05.214 RUH Usage Desc #005: RUH Attributes: Unused
00:10:05.214 RUH Usage Desc #006: RUH Attributes: Unused
00:10:05.214 RUH Usage Desc #007: RUH Attributes: Unused
00:10:05.214
00:10:05.214 FDP statistics log page
00:10:05.214 =======================
00:10:05.214 Host bytes with metadata written: 2094194688
00:10:05.214 Media bytes with metadata written: 2094555136
00:10:05.214 Media bytes erased: 0
00:10:05.214
00:10:05.214 FDP Reclaim unit handle status
00:10:05.214 ==============================
00:10:05.214 Number of RUHS descriptors: 2
00:10:05.214 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000012d2
00:10:05.214 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:10:05.214
00:10:05.214 FDP write on placement id: 0 success
00:10:05.214
00:10:05.214 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:10:05.214
00:10:05.214 IO mgmt send: RUH update for Placement ID: #0 Success
00:10:05.214
00:10:05.214 Get Feature: FDP Events for Placement handle: #0
00:10:05.214 ========================
00:10:05.214 Number of FDP Events: 6
00:10:05.214 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:10:05.214 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:10:05.214 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:10:05.214 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:10:05.214 FDP Event: #4 Type: Media Reallocated Enabled: No
00:10:05.214 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:10:05.214
00:10:05.214 FDP events log page
00:10:05.215 ===================
00:10:05.215 Number of FDP events: 1
00:10:05.215 FDP Event #0:
00:10:05.215 Event Type: RU Not Written to Capacity
00:10:05.215 Placement Identifier: Valid
00:10:05.215 NSID: Valid
00:10:05.215 Location: Valid
00:10:05.215 Placement Identifier: 0
00:10:05.215 Event Timestamp: 6
00:10:05.215 Namespace Identifier: 1
00:10:05.215 Reclaim Group Identifier: 0
00:10:05.215 Reclaim Unit Handle Identifier: 0
00:10:05.215
00:10:05.215 FDP test passed
00:10:05.476
00:10:05.476 real 0m0.233s
00:10:05.476 user 0m0.057s
00:10:05.476 sys 0m0.074s
00:10:05.477 00:41:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:05.477 ************************************
00:10:05.477 END TEST nvme_flexible_data_placement
00:10:05.477 ************************************
00:10:05.477 00:41:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:10:05.477
00:10:05.477 real 0m7.882s
00:10:05.477 user 0m1.036s
00:10:05.477 sys 0m1.572s
00:10:05.477 ************************************
00:10:05.477 END TEST nvme_fdp
00:10:05.477 ************************************
00:10:05.477 00:41:57 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:05.477 00:41:57 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:05.477 00:41:57 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:10:05.477 00:41:57 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:05.477 00:41:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:10:05.477 00:41:57 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:05.477 00:41:57 -- common/autotest_common.sh@10 -- # set +x
00:10:05.477 ************************************
00:10:05.477 START TEST nvme_rpc
00:10:05.477 ************************************
00:10:05.477 00:41:57 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:05.477 * Looking for test storage...
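
The long trace above is functions.sh building one associative array per controller out of "register : value" pairs, after which get_ctrls_with_feature selects nvme3 as the only controller whose CTRATT (0x88010) has bit 19, the Flexible Data Placement capability bit, set; the 0x8000 reported by the other three controllers does not. A condensed sketch of that logic, not the verbatim functions.sh source, assuming nvme-cli style "field : value" id-ctrl output:

  declare -A nvme3
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}           # field names are padded with spaces
      [[ -n $reg && -n $val ]] || continue
      nvme3[$reg]=${val# }               # drop the space after the colon
  done < <(nvme id-ctrl /dev/nvme3)

  ctrl_has_fdp() {                       # mirrors functions.sh@176-180 in the trace
      local -n _ctrl=$1                  # nameref to the per-controller array
      local ctratt=${_ctrl[ctratt]:-0}
      (( ctratt & 1 << 19 ))             # CTRATT bit 19 advertises FDP
  }
  ctrl_has_fdp nvme3 && echo nvme3       # 0x88010 & 0x80000 != 0, so nvme3 is picked
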
00:10:05.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:05.477 00:41:57 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:05.477 00:41:57 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:05.477 00:41:57 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.739 00:41:57 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:05.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.739 --rc genhtml_branch_coverage=1 00:10:05.739 --rc genhtml_function_coverage=1 00:10:05.739 --rc genhtml_legend=1 00:10:05.739 --rc geninfo_all_blocks=1 00:10:05.739 --rc geninfo_unexecuted_blocks=1 00:10:05.739 00:10:05.739 ' 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:05.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.739 --rc genhtml_branch_coverage=1 00:10:05.739 --rc genhtml_function_coverage=1 00:10:05.739 --rc genhtml_legend=1 00:10:05.739 --rc geninfo_all_blocks=1 00:10:05.739 --rc geninfo_unexecuted_blocks=1 00:10:05.739 00:10:05.739 ' 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:05.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.739 --rc genhtml_branch_coverage=1 00:10:05.739 --rc genhtml_function_coverage=1 00:10:05.739 --rc genhtml_legend=1 00:10:05.739 --rc geninfo_all_blocks=1 00:10:05.739 --rc geninfo_unexecuted_blocks=1 00:10:05.739 00:10:05.739 ' 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:05.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.739 --rc genhtml_branch_coverage=1 00:10:05.739 --rc genhtml_function_coverage=1 00:10:05.739 --rc genhtml_legend=1 00:10:05.739 --rc geninfo_all_blocks=1 00:10:05.739 --rc geninfo_unexecuted_blocks=1 00:10:05.739 00:10:05.739 ' 00:10:05.739 00:41:57 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.739 00:41:57 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:05.739 00:41:57 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:05.739 00:41:57 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78055 00:10:05.739 00:41:57 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:05.739 00:41:57 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78055 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 78055 ']' 00:10:05.739 00:41:57 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.739 00:41:57 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.739 [2024-11-17 00:41:57.754989] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:05.739 [2024-11-17 00:41:57.755201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78055 ]
00:10:06.000 [2024-11-17 00:41:57.923216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:06.000 [2024-11-17 00:41:57.997184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:10:06.000 [2024-11-17 00:41:57.997245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.572 00:41:58 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:06.572 00:41:58 nvme_rpc -- common/autotest_common.sh@864 -- # return 0
00:10:06.572 00:41:58 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:10:07.145 Nvme0n1
00:10:07.145 00:41:58 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:10:07.145 00:41:58 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:10:07.145 request:
00:10:07.145 {
00:10:07.145 "bdev_name": "Nvme0n1",
00:10:07.145 "filename": "non_existing_file",
00:10:07.145 "method": "bdev_nvme_apply_firmware",
00:10:07.145 "req_id": 1
00:10:07.145 }
00:10:07.145 Got JSON-RPC error response
00:10:07.145 response:
00:10:07.145 {
00:10:07.145 "code": -32603,
00:10:07.145 "message": "open file failed."
00:10:07.145 }
00:10:07.145 00:41:59 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:10:07.145 00:41:59 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:10:07.145 00:41:59 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:10:07.404 00:41:59 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:10:07.404 00:41:59 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78055
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 78055 ']'
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 78055
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@955 -- # uname
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78055
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:07.404 killing process with pid 78055
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78055'
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@969 -- # kill 78055
00:10:07.404 00:41:59 nvme_rpc -- common/autotest_common.sh@974 -- # wait 78055
00:10:07.665
00:10:07.665 real 0m2.317s
00:10:07.665 user 0m4.219s
00:10:07.665 sys 0m0.741s
00:10:07.665 ************************************
00:10:07.665 END TEST nvme_rpc
00:10:07.665 00:41:59 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:07.665 00:41:59 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:07.665 ************************************
00:10:07.928 00:41:59 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:10:07.928 00:41:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le
1 ']' 00:10:07.928 00:41:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.928 00:41:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.928 ************************************ 00:10:07.928 START TEST nvme_rpc_timeouts 00:10:07.928 ************************************ 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:07.928 * Looking for test storage... 00:10:07.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.928 00:41:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:07.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.928 --rc genhtml_branch_coverage=1 00:10:07.928 --rc genhtml_function_coverage=1 00:10:07.928 --rc genhtml_legend=1 00:10:07.928 --rc geninfo_all_blocks=1 00:10:07.928 --rc geninfo_unexecuted_blocks=1 00:10:07.928 00:10:07.928 ' 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:07.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.928 --rc genhtml_branch_coverage=1 00:10:07.928 --rc genhtml_function_coverage=1 00:10:07.928 --rc genhtml_legend=1 00:10:07.928 --rc geninfo_all_blocks=1 00:10:07.928 --rc geninfo_unexecuted_blocks=1 00:10:07.928 00:10:07.928 ' 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:07.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.928 --rc genhtml_branch_coverage=1 00:10:07.928 --rc genhtml_function_coverage=1 00:10:07.928 --rc genhtml_legend=1 00:10:07.928 --rc geninfo_all_blocks=1 00:10:07.928 --rc geninfo_unexecuted_blocks=1 00:10:07.928 00:10:07.928 ' 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:07.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.928 --rc genhtml_branch_coverage=1 00:10:07.928 --rc genhtml_function_coverage=1 00:10:07.928 --rc genhtml_legend=1 00:10:07.928 --rc geninfo_all_blocks=1 00:10:07.928 --rc geninfo_unexecuted_blocks=1 00:10:07.928 00:10:07.928 ' 00:10:07.928 00:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.928 00:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78115 00:10:07.928 00:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78115 00:10:07.928 00:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78147 00:10:07.928 00:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
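
The trap registered above (nvme_rpc_timeouts.sh@26) guarantees the spdk_tgt process dies and both settings tmpfiles are removed on any interrupt or abnormal exit; the success path disarms it at nvme_rpc_timeouts.sh@52 near the end of the run. A minimal sketch of the same cleanup pattern (file names illustrative):

  spdk_tgt_pid=0
  tmpfile_default_settings=$(mktemp) tmpfile_modified_settings=$(mktemp)
  trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings}; exit 1' SIGINT SIGTERM EXIT
  # ... start the target and run the timeout checks ...
  trap - SIGINT SIGTERM EXIT             # disarm once everything has passed
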
00:10:07.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.928 00:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78147 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 78147 ']' 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.928 00:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:07.928 00:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:08.191 [2024-11-17 00:42:00.023499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:08.191 [2024-11-17 00:42:00.023680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78147 ] 00:10:08.191 [2024-11-17 00:42:00.179237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:08.452 [2024-11-17 00:42:00.253688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.452 [2024-11-17 00:42:00.253792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.024 00:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.024 Checking default timeout settings: 00:10:09.024 00:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:09.024 00:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:09.024 00:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:09.285 Making settings changes with rpc: 00:10:09.285 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:09.285 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:09.546 Check default vs. modified settings: 00:10:09.546 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:09.546 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78115 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78115 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:09.807 Setting action_on_timeout is changed as expected. 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78115 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78115 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:09.807 Setting timeout_us is changed as expected. 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78115 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78115 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:09.807 Setting timeout_admin_us is changed as expected. 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78115 /tmp/settings_modified_78115 00:10:09.807 00:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78147 00:10:09.807 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 78147 ']' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 78147 00:10:09.807 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:09.807 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.807 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78147 00:10:09.807 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.808 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.808 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78147' 00:10:09.808 killing process with pid 78147 00:10:09.808 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 78147 00:10:09.808 00:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 78147 00:10:10.069 RPC TIMEOUT SETTING TEST PASSED. 00:10:10.069 00:42:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
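
Stripped of the xtrace noise, the three checks above reduce to: snapshot the default config, flip the NVMe timeout options over JSON-RPC, snapshot again, and confirm each setting moved. A condensed sketch using the same commands the trace shows (tmpfile names illustrative; the test keys them off the target pid):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" save_config > /tmp/settings_default
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config > /tmp/settings_modified
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done
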
00:10:10.069 00:10:10.069 real 0m2.297s 00:10:10.069 user 0m4.351s 00:10:10.069 sys 0m0.585s 00:10:10.069 00:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.069 ************************************ 00:10:10.069 END TEST nvme_rpc_timeouts 00:10:10.069 ************************************ 00:10:10.069 00:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:10.069 00:42:02 -- spdk/autotest.sh@239 -- # uname -s 00:10:10.069 00:42:02 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:10.069 00:42:02 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:10.069 00:42:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:10.069 00:42:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.069 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:10:10.069 ************************************ 00:10:10.069 START TEST sw_hotplug 00:10:10.069 ************************************ 00:10:10.069 00:42:02 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:10.331 * Looking for test storage... 00:10:10.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:10.331 00:42:02 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:10.331 00:42:02 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:10.331 00:42:02 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:10.331 00:42:02 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.331 00:42:02 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:10.331 00:42:02 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.331 00:42:02 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:10.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.331 --rc genhtml_branch_coverage=1 00:10:10.331 --rc genhtml_function_coverage=1 00:10:10.331 --rc genhtml_legend=1 00:10:10.331 --rc geninfo_all_blocks=1 00:10:10.331 --rc geninfo_unexecuted_blocks=1 00:10:10.331 00:10:10.331 ' 00:10:10.331 00:42:02 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:10.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.331 --rc genhtml_branch_coverage=1 00:10:10.331 --rc genhtml_function_coverage=1 00:10:10.331 --rc genhtml_legend=1 00:10:10.331 --rc geninfo_all_blocks=1 00:10:10.331 --rc geninfo_unexecuted_blocks=1 00:10:10.331 00:10:10.331 ' 00:10:10.331 00:42:02 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:10.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.331 --rc genhtml_branch_coverage=1 00:10:10.331 --rc genhtml_function_coverage=1 00:10:10.331 --rc genhtml_legend=1 00:10:10.331 --rc geninfo_all_blocks=1 00:10:10.331 --rc geninfo_unexecuted_blocks=1 00:10:10.331 00:10:10.331 ' 00:10:10.331 00:42:02 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:10.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.331 --rc genhtml_branch_coverage=1 00:10:10.331 --rc genhtml_function_coverage=1 00:10:10.331 --rc genhtml_legend=1 00:10:10.331 --rc geninfo_all_blocks=1 00:10:10.331 --rc geninfo_unexecuted_blocks=1 00:10:10.331 00:10:10.331 ' 00:10:10.331 00:42:02 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:10.591 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:10.853 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:10.853 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:10.853 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:10.853 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:10.853 00:42:02 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:10.853 00:42:02 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:10.853 00:42:02 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
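
nvme_in_userspace, expanded step by step in the trace below, lists PCI functions whose class/subclass/progif triplet is 01/08/02 (an NVMe controller) and keeps only those not claimed by the kernel nvme driver, i.e. devices usable by userspace drivers such as uio_pci_generic or vfio-pci. A condensed sketch of that filter, simplified from the lspci/awk pipeline in the trace:

  for bdf in $(lspci -mm -n -D | grep -i -- -p02 | awk '$2 == "\"0108\"" {print $1}'); do
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue   # still kernel-owned, skip
      echo "$bdf"
  done
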
00:10:10.853 00:42:02 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:10.853 00:42:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:10.854 00:42:02 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:10.854 00:42:02 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:10.854 00:42:02 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:10.854 00:42:02 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:10.854 00:42:02 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:11.426 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.426 Waiting for block devices as requested 00:10:11.426 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.426 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.688 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.688 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.981 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:16.981 00:42:08 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:16.981 00:42:08 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:17.242 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:17.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:17.243 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:17.816 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:17.816 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.816 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.077 00:42:09 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:18.077 00:42:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78998 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:18.077 00:42:10 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:18.077 00:42:10 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:18.077 00:42:10 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:18.077 00:42:10 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:18.077 00:42:10 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:18.077 00:42:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:18.338 Initializing NVMe Controllers 00:10:18.338 Attaching to 0000:00:10.0 00:10:18.339 Attaching to 0000:00:11.0 00:10:18.339 Attached to 0000:00:10.0 00:10:18.339 Attached to 0000:00:11.0 00:10:18.339 Initialization complete. Starting I/O... 
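
run_hotplug has now started the SPDK hotplug example in the background (hotplug_pid=78998), and debug_remove_attach_helper wraps the remove/attach loop in a timer: TIMEFORMAT=%2R makes bash's time keyword print just the elapsed seconds, which become helper_time. A minimal sketch of that idiom, assuming nothing beyond the bash builtins visible in the trace (timing_cmd_sketch is an illustrative name, not the repo's function):

    # Run a command under bash's `time` keyword and capture the
    # elapsed-seconds line (%2R) that it writes to stderr.
    timing_cmd_sketch() {
        local time TIMEFORMAT=%2R
        time=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
        echo "$time"
    }

    helper_time=$(timing_cmd_sketch sleep 2)    # prints ~2.00
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2

This is why the summary line "remove_attach_helper took 42.94s ..." appears only once the three hotplug cycles below have finished.
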
00:10:18.339 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:18.339 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:18.339 00:10:19.283 QEMU NVMe Ctrl (12340 ): 2048 I/Os completed (+2048) 00:10:19.283 QEMU NVMe Ctrl (12341 ): 2052 I/Os completed (+2052) 00:10:19.283 00:10:20.255 QEMU NVMe Ctrl (12340 ): 4852 I/Os completed (+2804) 00:10:20.255 QEMU NVMe Ctrl (12341 ): 4857 I/Os completed (+2805) 00:10:20.255 00:10:21.199 QEMU NVMe Ctrl (12340 ): 7672 I/Os completed (+2820) 00:10:21.199 QEMU NVMe Ctrl (12341 ): 7677 I/Os completed (+2820) 00:10:21.199 00:10:22.587 QEMU NVMe Ctrl (12340 ): 10512 I/Os completed (+2840) 00:10:22.587 QEMU NVMe Ctrl (12341 ): 10533 I/Os completed (+2856) 00:10:22.587 00:10:23.160 QEMU NVMe Ctrl (12340 ): 14840 I/Os completed (+4328) 00:10:23.160 QEMU NVMe Ctrl (12341 ): 14865 I/Os completed (+4332) 00:10:23.160 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.105 [2024-11-17 00:42:16.019318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:24.105 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:24.105 [2024-11-17 00:42:16.020229] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.020261] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.020276] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.020290] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.105 [2024-11-17 00:42:16.021521] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.021562] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.021574] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.021586] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.105 [2024-11-17 00:42:16.040772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:24.105 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:24.105 [2024-11-17 00:42:16.041556] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.041577] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.041590] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.041601] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.105 [2024-11-17 00:42:16.042503] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.042529] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.042544] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 [2024-11-17 00:42:16.042554] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:24.105 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:24.105 EAL: Scan for (pci) bus failed. 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.105 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.366 00:10:24.366 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.366 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.366 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.366 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.366 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.366 Attaching to 0000:00:10.0 00:10:24.366 Attached to 0000:00:10.0 00:10:24.366 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.366 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.366 00:42:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.366 Attaching to 0000:00:11.0 00:10:24.366 Attached to 0000:00:11.0 00:10:25.311 QEMU NVMe Ctrl (12340 ): 4347 I/Os completed (+4347) 00:10:25.311 QEMU NVMe Ctrl (12341 ): 3948 I/Os completed (+3948) 00:10:25.311 00:10:26.254 QEMU NVMe Ctrl (12340 ): 8779 I/Os completed (+4432) 00:10:26.254 QEMU NVMe Ctrl (12341 ): 8380 I/Os completed (+4432) 00:10:26.254 00:10:27.197 QEMU NVMe Ctrl (12340 ): 13203 I/Os completed (+4424) 00:10:27.197 QEMU NVMe Ctrl (12341 ): 12804 I/Os completed (+4424) 00:10:27.197 00:10:28.584 QEMU NVMe Ctrl (12340 ): 17607 I/Os completed (+4404) 00:10:28.584 QEMU NVMe Ctrl (12341 ): 17208 I/Os completed (+4404) 00:10:28.584 00:10:29.158 QEMU NVMe Ctrl (12340 ): 21944 I/Os completed (+4337) 00:10:29.158 QEMU NVMe Ctrl (12341 ): 21606 I/Os completed (+4398) 00:10:29.158 00:10:30.543 QEMU NVMe Ctrl (12340 ): 26353 I/Os completed (+4409) 00:10:30.543 QEMU NVMe Ctrl (12341 ): 26018 I/Os completed (+4412) 00:10:30.543 00:10:31.487 QEMU NVMe Ctrl (12340 ): 30761 I/Os completed (+4408) 00:10:31.487 QEMU NVMe Ctrl (12341 ): 30426 I/Os completed (+4408) 
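
One full use_bdev=false cycle is visible above: sw_hotplug.sh@40 writes a 1 per device to yank it, the example app logs "Controller removed" and aborts the outstanding commands, and @56 plus @58-@62 re-enumerate the bus and rebind both functions to uio_pci_generic before the next cycle. xtrace prints the echo arguments but not their redirection targets, so the sysfs paths below are an assumed reconstruction of what those writes hit, not text from the log:

    # One surprise-removal/rescan cycle (sysfs paths are assumptions).
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"    # @40: drop the function
    done
    echo 1 > /sys/bus/pci/rescan                        # @56: re-enumerate the bus
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
        echo "$dev" > /sys/bus/pci/drivers_probe        # @60/@61: the script echoes the BDF twice
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62
    done

The interleaved "EAL: Scan for (pci) bus failed" message is most likely the example app re-reading sysfs while 0000:00:11.0 was still unplugged; the test tolerates it, and the subsequent Attaching/Attached lines confirm both controllers came back.
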
00:10:31.487 00:10:32.429 QEMU NVMe Ctrl (12340 ): 35177 I/Os completed (+4416) 00:10:32.429 QEMU NVMe Ctrl (12341 ): 34842 I/Os completed (+4416) 00:10:32.429 00:10:33.372 QEMU NVMe Ctrl (12340 ): 39517 I/Os completed (+4340) 00:10:33.372 QEMU NVMe Ctrl (12341 ): 39182 I/Os completed (+4340) 00:10:33.372 00:10:34.317 QEMU NVMe Ctrl (12340 ): 43352 I/Os completed (+3835) 00:10:34.317 QEMU NVMe Ctrl (12341 ): 43018 I/Os completed (+3836) 00:10:34.317 00:10:35.261 QEMU NVMe Ctrl (12340 ): 46300 I/Os completed (+2948) 00:10:35.261 QEMU NVMe Ctrl (12341 ): 45967 I/Os completed (+2949) 00:10:35.261 00:10:36.205 QEMU NVMe Ctrl (12340 ): 49248 I/Os completed (+2948) 00:10:36.205 QEMU NVMe Ctrl (12341 ): 48924 I/Os completed (+2957) 00:10:36.205 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.467 [2024-11-17 00:42:28.325499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:36.467 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:36.467 [2024-11-17 00:42:28.326793] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.326850] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.326867] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.326888] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:36.467 [2024-11-17 00:42:28.329099] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.329174] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.329190] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.329206] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.467 [2024-11-17 00:42:28.344849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:36.467 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:36.467 [2024-11-17 00:42:28.345982] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.346039] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.346058] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.346073] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:36.467 [2024-11-17 00:42:28.347310] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.347377] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.347400] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 [2024-11-17 00:42:28.347414] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.467 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:36.467 EAL: Scan for (pci) bus failed. 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.467 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:36.729 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:36.729 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.729 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.729 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.729 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:36.729 Attaching to 0000:00:10.0 00:10:36.729 Attached to 0000:00:10.0 00:10:36.729 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:36.729 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.729 00:42:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:36.729 Attaching to 0000:00:11.0 00:10:36.729 Attached to 0000:00:11.0 00:10:37.300 QEMU NVMe Ctrl (12340 ): 1887 I/Os completed (+1887) 00:10:37.300 QEMU NVMe Ctrl (12341 ): 1624 I/Os completed (+1624) 00:10:37.300 00:10:38.241 QEMU NVMe Ctrl (12340 ): 5094 I/Os completed (+3207) 00:10:38.241 QEMU NVMe Ctrl (12341 ): 4961 I/Os completed (+3337) 00:10:38.241 00:10:39.183 QEMU NVMe Ctrl (12340 ): 8212 I/Os completed (+3118) 00:10:39.183 QEMU NVMe Ctrl (12341 ): 8075 I/Os completed (+3114) 00:10:39.183 00:10:40.569 QEMU NVMe Ctrl (12340 ): 11252 I/Os completed (+3040) 00:10:40.569 QEMU NVMe Ctrl (12341 ): 11146 I/Os completed (+3071) 00:10:40.569 00:10:41.529 QEMU NVMe Ctrl (12340 ): 14233 I/Os completed (+2981) 00:10:41.529 QEMU NVMe Ctrl (12341 ): 14246 I/Os completed (+3100) 00:10:41.529 00:10:42.179 QEMU NVMe Ctrl (12340 ): 17417 I/Os completed (+3184) 00:10:42.179 QEMU NVMe Ctrl (12341 ): 17467 I/Os completed (+3221) 00:10:42.179 00:10:43.575 QEMU NVMe Ctrl (12340 ): 21700 I/Os completed (+4283) 00:10:43.575 QEMU NVMe Ctrl (12341 ): 21755 I/Os completed (+4288) 00:10:43.575 
00:10:44.508 QEMU NVMe Ctrl (12340 ): 26286 I/Os completed (+4586) 00:10:44.509 QEMU NVMe Ctrl (12341 ): 26339 I/Os completed (+4584) 00:10:44.509 00:10:45.448 QEMU NVMe Ctrl (12340 ): 30320 I/Os completed (+4034) 00:10:45.448 QEMU NVMe Ctrl (12341 ): 30371 I/Os completed (+4032) 00:10:45.448 00:10:46.391 QEMU NVMe Ctrl (12340 ): 34145 I/Os completed (+3825) 00:10:46.392 QEMU NVMe Ctrl (12341 ): 34315 I/Os completed (+3944) 00:10:46.392 00:10:47.338 QEMU NVMe Ctrl (12340 ): 37377 I/Os completed (+3232) 00:10:47.338 QEMU NVMe Ctrl (12341 ): 37547 I/Os completed (+3232) 00:10:47.338 00:10:48.288 QEMU NVMe Ctrl (12340 ): 40385 I/Os completed (+3008) 00:10:48.288 QEMU NVMe Ctrl (12341 ): 40561 I/Os completed (+3014) 00:10:48.288 00:10:48.861 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.862 [2024-11-17 00:42:40.660953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:48.862 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:48.862 [2024-11-17 00:42:40.662224] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.662278] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.662294] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.662319] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:48.862 [2024-11-17 00:42:40.664650] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.664726] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.664743] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.664758] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.862 [2024-11-17 00:42:40.693787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:48.862 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:48.862 [2024-11-17 00:42:40.695940] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.696017] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.696052] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.696081] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:48.862 [2024-11-17 00:42:40.698022] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.698077] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.698095] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 [2024-11-17 00:42:40.698109] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.862 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:48.862 Attaching to 0000:00:10.0 00:10:48.862 Attached to 0000:00:10.0 00:10:49.122 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:49.122 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.122 00:42:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:49.122 Attaching to 0000:00:11.0 00:10:49.122 Attached to 0000:00:11.0 00:10:49.122 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:49.122 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:49.122 [2024-11-17 00:42:40.958590] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:01.359 00:42:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:01.359 00:42:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:01.359 00:42:52 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.94 00:11:01.359 00:42:52 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.94 00:11:01.359 00:42:52 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:01.359 00:42:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.94 00:11:01.359 00:42:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.94 2 00:11:01.359 remove_attach_helper took 42.94s to complete (handling 2 nvme drive(s)) 00:42:52 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:07.949 00:42:58 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78998 00:11:07.949 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78998) - No such process 00:11:07.949 00:42:58 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78998 00:11:07.949 00:42:58 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:07.949 00:42:58 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:07.949 00:42:58 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:07.949 00:42:58 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79540 00:11:07.949 00:42:58 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:07.949 00:42:58 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79540 00:11:07.949 00:42:58 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79540 ']' 00:11:07.949 00:42:58 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.949 00:42:58 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.949 00:42:58 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.949 00:42:58 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.949 00:42:58 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.949 00:42:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.949 [2024-11-17 00:42:59.054604] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:07.949 [2024-11-17 00:42:59.054767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79540 ] 00:11:07.949 [2024-11-17 00:42:59.205849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.949 [2024-11-17 00:42:59.256767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:07.949 00:42:59 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.949 00:42:59 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:07.949 00:42:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:07.949 00:42:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:07.949 00:42:59 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:07.949 00:42:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:07.949 00:42:59 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:07.949 00:42:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:07.949 00:42:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:07.949 00:42:59 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.518 00:43:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.518 00:43:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.518 00:43:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:14.518 00:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:14.518 [2024-11-17 00:43:06.004135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:14.518 [2024-11-17 00:43:06.005218] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.519 [2024-11-17 00:43:06.005252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.519 [2024-11-17 00:43:06.005265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.519 [2024-11-17 00:43:06.005277] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.519 [2024-11-17 00:43:06.005287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.519 [2024-11-17 00:43:06.005294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.519 [2024-11-17 00:43:06.005303] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.519 [2024-11-17 00:43:06.005309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.519 [2024-11-17 00:43:06.005316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.519 [2024-11-17 00:43:06.005323] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.519 [2024-11-17 00:43:06.005330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.519 [2024-11-17 00:43:06.005337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.519 [2024-11-17 00:43:06.404140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:14.519 [2024-11-17 00:43:06.405187] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.519 [2024-11-17 00:43:06.405217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.519 [2024-11-17 00:43:06.405226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.519 [2024-11-17 00:43:06.405237] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.519 [2024-11-17 00:43:06.405243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.519 [2024-11-17 00:43:06.405251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.519 [2024-11-17 00:43:06.405258] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.519 [2024-11-17 00:43:06.405266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.519 [2024-11-17 00:43:06.405272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.519 [2024-11-17 00:43:06.405281] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.519 [2024-11-17 00:43:06.405287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.519 [2024-11-17 00:43:06.405295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.519 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:14.519 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.519 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.519 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.519 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.519 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.519 00:43:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.519 00:43:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.519 00:43:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.519 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:14.519 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.777 
00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.777 00:43:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.982 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:26.982 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.983 00:43:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.983 00:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.983 00:43:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.983 00:43:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.983 00:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.983 00:43:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:26.983 00:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.983 [2024-11-17 00:43:18.904304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
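
In this second phase the example app is gone: spdk_tgt owns the controllers, hotplug monitoring was enabled over RPC (bdev_nvme_set_hotplug -e, traced earlier), and use_bdev=true. Removal is now observed by polling the target: the bdev_bdfs helper traced above reduces bdev_get_bdevs output to the unique NVMe PCI addresses still exposed, and the @50-@51 loop samples it every half second until none remain. A condensed sketch (rpc.py stands in for the repo's rpc_cmd wrapper; the jq filter is verbatim from the trace):

    # Which PCI controllers does the running target still expose as bdevs?
    bdev_bdfs() {
        ./scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # sw_hotplug.sh@50-@51 (condensed): poll until every removal was seen.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done

In the trace the first poll still returns both BDFs ("(( 2 > 0 ))"), the ASYNC EVENT REQUEST aborts are the target tearing down its admin queue pairs as each controller disappears, and the next poll returns none.
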
00:11:26.983 [2024-11-17 00:43:18.905408] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.983 [2024-11-17 00:43:18.905438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.983 [2024-11-17 00:43:18.905451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.983 [2024-11-17 00:43:18.905462] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.983 [2024-11-17 00:43:18.905470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.983 [2024-11-17 00:43:18.905477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.983 [2024-11-17 00:43:18.905485] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.983 [2024-11-17 00:43:18.905491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.983 [2024-11-17 00:43:18.905499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.983 [2024-11-17 00:43:18.905505] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.983 [2024-11-17 00:43:18.905513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.983 [2024-11-17 00:43:18.905519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.548 [2024-11-17 00:43:19.304306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
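
Once the target has seen both controllers leave and the rescan/rebind sequence has run, each bdev-mode iteration closes with the check traced at sw_hotplug.sh@70-@71: after the sleep 12 settling period (twice hotplug_wait), the sorted PCI addresses reported by the target must again equal the pair the test removed. The trace renders it as a [[ ... ]] match against an escaped pattern; a plain string comparison expresses the same check:

    # @66/@70/@71 (condensed): wait out autoattach, then verify both returned.
    sleep $((hotplug_wait * 2))
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]   # succeeds when 0000:00:10.0 0000:00:11.0 are back

Only then does the loop decrement hotplug_events and start the next removal; after three iterations the helper reports its total ("remove_attach_helper took 44.66s ...").
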
00:11:27.548 [2024-11-17 00:43:19.305321] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.548 [2024-11-17 00:43:19.305352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.548 [2024-11-17 00:43:19.305372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.548 [2024-11-17 00:43:19.305382] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.549 [2024-11-17 00:43:19.305389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.549 [2024-11-17 00:43:19.305398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.549 [2024-11-17 00:43:19.305404] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.549 [2024-11-17 00:43:19.305412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.549 [2024-11-17 00:43:19.305418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.549 [2024-11-17 00:43:19.305427] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.549 [2024-11-17 00:43:19.305434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.549 [2024-11-17 00:43:19.305441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.549 00:43:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.549 00:43:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.549 00:43:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.549 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:27.807 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:27.807 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.807 00:43:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.044 00:43:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.044 00:43:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.044 00:43:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.044 00:43:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.044 00:43:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.044 00:43:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:40.044 00:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:40.044 [2024-11-17 00:43:31.804485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:40.044 [2024-11-17 00:43:31.805519] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.044 [2024-11-17 00:43:31.805555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.044 [2024-11-17 00:43:31.805569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.044 [2024-11-17 00:43:31.805580] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.044 [2024-11-17 00:43:31.805589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.044 [2024-11-17 00:43:31.805596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.044 [2024-11-17 00:43:31.805603] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.044 [2024-11-17 00:43:31.805610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.044 [2024-11-17 00:43:31.805618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.044 [2024-11-17 00:43:31.805624] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.044 [2024-11-17 00:43:31.805632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.044 [2024-11-17 00:43:31.805638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.303 [2024-11-17 00:43:32.204491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:40.303 [2024-11-17 00:43:32.205482] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.303 [2024-11-17 00:43:32.205513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.303 [2024-11-17 00:43:32.205522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.303 [2024-11-17 00:43:32.205534] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.303 [2024-11-17 00:43:32.205541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.303 [2024-11-17 00:43:32.205551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.303 [2024-11-17 00:43:32.205557] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.303 [2024-11-17 00:43:32.205565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.303 [2024-11-17 00:43:32.205571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.303 [2024-11-17 00:43:32.205578] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.303 [2024-11-17 00:43:32.205584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.303 [2024-11-17 00:43:32.205592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.303 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:40.303 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.303 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.303 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.303 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.303 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.303 00:43:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.303 00:43:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.303 00:43:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.303 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:40.303 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.562 00:43:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.66 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.66 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.66 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.66 2 00:11:52.765 remove_attach_helper took 44.66s to complete (handling 2 nvme drive(s)) 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:52.765 00:43:44 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:52.765 00:43:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:52.765 00:43:44 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.324 00:43:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.324 00:43:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.324 00:43:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:59.324 00:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:59.324 [2024-11-17 00:43:50.691497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:59.324 [2024-11-17 00:43:50.692285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.324 [2024-11-17 00:43:50.692314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.324 [2024-11-17 00:43:50.692327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.324 [2024-11-17 00:43:50.692340] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.324 [2024-11-17 00:43:50.692349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.324 [2024-11-17 00:43:50.692366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.324 [2024-11-17 00:43:50.692374] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.324 [2024-11-17 00:43:50.692381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.324 [2024-11-17 00:43:50.692393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.324 [2024-11-17 00:43:50.692400] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.324 [2024-11-17 00:43:50.692409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.324 [2024-11-17 00:43:50.692415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.324 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:59.324 00:43:51 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:59.324 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:59.324 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.324 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.324 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.324 00:43:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.324 00:43:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.324 00:43:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.324 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:59.324 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:59.324 [2024-11-17 00:43:51.291508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:59.324 [2024-11-17 00:43:51.292245] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.324 [2024-11-17 00:43:51.292278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.324 [2024-11-17 00:43:51.292288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.324 [2024-11-17 00:43:51.292301] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.324 [2024-11-17 00:43:51.292308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.324 [2024-11-17 00:43:51.292316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.324 [2024-11-17 00:43:51.292322] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.324 [2024-11-17 00:43:51.292330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.324 [2024-11-17 00:43:51.292337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.324 [2024-11-17 00:43:51.292345] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.324 [2024-11-17 00:43:51.292351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.325 [2024-11-17 00:43:51.292370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.891 00:43:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.891 00:43:51 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.891 00:43:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.891 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:00.149 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:00.149 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.149 00:43:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:12.351 00:44:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:12.351 00:44:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:12.351 00:44:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:12.351 00:44:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.351 00:44:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.351 00:44:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.351 00:44:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.351 00:44:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.351 00:44:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.351 00:44:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.351 [2024-11-17 00:44:04.091688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
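
The trace above captures one full software-hotplug cycle: each controller is surprise-removed (sw_hotplug.sh@39-40), the PCI bus is rescanned (sh@56), and each device is rebound to uio_pci_generic (sh@58-62) before the script sleeps and re-verifies the bdev list. A minimal sketch of that cycle, assuming the standard Linux PCI sysfs paths; the trace shows only the echoed values, not their destination files:

  # Sketch only -- sysfs paths are assumptions inferred from the echoes above.
  # "${nvmes[@]}" holds the PCI BDFs under test.
  for bdf in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # sh@40: surprise-remove
  done
  echo 1 > /sys/bus/pci/rescan                             # sh@56: rediscover devices
  for bdf in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # sh@59
    echo "$bdf" > /sys/bus/pci/drivers_probe   # sh@60/61 echo the BDF twice;
                                               # drivers_probe is one plausible target
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # sh@62: clear override
  done
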
00:12:12.351 [2024-11-17 00:44:04.092471] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.351 [2024-11-17 00:44:04.092491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.351 [2024-11-17 00:44:04.092503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.351 [2024-11-17 00:44:04.092514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.351 [2024-11-17 00:44:04.092523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.351 [2024-11-17 00:44:04.092530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.351 [2024-11-17 00:44:04.092548] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.351 [2024-11-17 00:44:04.092555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.351 [2024-11-17 00:44:04.092563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.351 [2024-11-17 00:44:04.092569] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.351 [2024-11-17 00:44:04.092577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.351 [2024-11-17 00:44:04.092583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.351 00:44:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:12.351 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:12.610 [2024-11-17 00:44:04.491690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
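
The repeated sw_hotplug.sh@12-13 trace lines come from the bdev_bdfs helper that the wait loop polls. The /dev/fd/63 argument to jq indicates the RPC output arrives via process substitution; reconstructed from the xtrace as a sketch, not the verbatim script:

  # List the PCI address of every NVMe controller still backing a bdev,
  # deduplicated; empty output means all controllers are detached.
  bdev_bdfs() {
    jq -r '.[].driver_specific.nvme[].pci_address' \
      <(rpc_cmd bdev_get_bdevs) | sort -u
  }
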
00:12:12.610 [2024-11-17 00:44:04.492605] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.610 [2024-11-17 00:44:04.492636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.610 [2024-11-17 00:44:04.492646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.610 [2024-11-17 00:44:04.492658] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.610 [2024-11-17 00:44:04.492666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.610 [2024-11-17 00:44:04.492675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.610 [2024-11-17 00:44:04.492682] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.610 [2024-11-17 00:44:04.492691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.610 [2024-11-17 00:44:04.492699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.610 [2024-11-17 00:44:04.492707] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.610 [2024-11-17 00:44:04.492714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.610 [2024-11-17 00:44:04.492723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.610 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:12.610 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:12.610 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:12.610 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.610 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.610 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.610 00:44:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.610 00:44:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.610 00:44:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.610 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:12.610 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.869 00:44:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.072 00:44:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.072 00:44:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.072 00:44:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.072 00:44:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.072 00:44:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.072 00:44:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:25.072 00:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:25.072 [2024-11-17 00:44:16.991909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:25.072 [2024-11-17 00:44:16.993032] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.072 [2024-11-17 00:44:16.993130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.072 [2024-11-17 00:44:16.993172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.072 [2024-11-17 00:44:16.993211] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.072 [2024-11-17 00:44:16.993253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.072 [2024-11-17 00:44:16.993278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.072 [2024-11-17 00:44:16.993305] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.072 [2024-11-17 00:44:16.993329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.072 [2024-11-17 00:44:16.993390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.072 [2024-11-17 00:44:16.993416] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.072 [2024-11-17 00:44:16.993443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.072 [2024-11-17 00:44:16.993467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.334 [2024-11-17 00:44:17.391930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
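
The sh@50-51 lines repeat the same poll until no controller remains: re-evaluate bdev_bdfs, report what is still attached, and back off for half a second. The inferred shape of that loop, as a sketch with variable names taken from the trace:

  bdfs=($(bdev_bdfs))
  while ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # sh@51
    sleep 0.5
    bdfs=($(bdev_bdfs))                                       # sh@50: poll again
  done
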
00:12:25.334 [2024-11-17 00:44:17.392980] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.334 [2024-11-17 00:44:17.393012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.334 [2024-11-17 00:44:17.393025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.334 [2024-11-17 00:44:17.393038] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.334 [2024-11-17 00:44:17.393048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.334 [2024-11-17 00:44:17.393059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.334 [2024-11-17 00:44:17.393067] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.334 [2024-11-17 00:44:17.393081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.334 [2024-11-17 00:44:17.393090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.334 [2024-11-17 00:44:17.393100] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.334 [2024-11-17 00:44:17.393108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.334 [2024-11-17 00:44:17.393118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.595 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:25.595 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:25.595 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:25.595 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.596 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.596 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.596 00:44:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.596 00:44:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.596 00:44:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.596 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:25.596 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:25.596 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:25.596 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:25.596 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:25.857 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:25.857 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:25.857 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:25.857 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:25.857 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:25.857 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:25.857 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:25.857 00:44:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.23 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.23 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.23 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.23 2 00:12:38.075 remove_attach_helper took 45.23s to complete (handling 2 nvme drive(s)) 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:38.075 00:44:29 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79540 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79540 ']' 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79540 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79540 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:38.075 killing process with pid 79540 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79540' 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79540 00:12:38.075 00:44:29 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79540 00:12:38.336 00:44:30 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:38.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:39.169 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:39.169 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:39.169 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.169 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.169 00:12:39.169 real 2m29.000s 00:12:39.169 user 1m49.257s 00:12:39.169 sys 0m18.263s 00:12:39.169 00:44:31 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.169 ************************************ 00:12:39.169 END TEST sw_hotplug 00:12:39.169 ************************************ 00:12:39.170 00:44:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.170 00:44:31 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:39.170 00:44:31 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:39.170 00:44:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:39.170 00:44:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.170 00:44:31 -- common/autotest_common.sh@10 -- # set +x 00:12:39.170 ************************************ 00:12:39.170 START TEST nvme_xnvme 00:12:39.170 ************************************ 00:12:39.170 00:44:31 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:39.432 * Looking for test storage... 00:12:39.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:39.432 00:44:31 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:39.432 00:44:31 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:39.432 00:44:31 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:39.432 00:44:31 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:39.432 00:44:31 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.432 00:44:31 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:39.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.432 --rc genhtml_branch_coverage=1 00:12:39.432 --rc genhtml_function_coverage=1 00:12:39.432 --rc genhtml_legend=1 00:12:39.432 --rc geninfo_all_blocks=1 00:12:39.432 --rc geninfo_unexecuted_blocks=1 00:12:39.432 00:12:39.432 ' 00:12:39.432 00:44:31 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:39.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.432 --rc genhtml_branch_coverage=1 00:12:39.432 --rc genhtml_function_coverage=1 00:12:39.432 --rc genhtml_legend=1 00:12:39.432 --rc geninfo_all_blocks=1 00:12:39.432 --rc geninfo_unexecuted_blocks=1 00:12:39.432 00:12:39.432 ' 00:12:39.432 00:44:31 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:39.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.432 --rc genhtml_branch_coverage=1 00:12:39.432 --rc genhtml_function_coverage=1 00:12:39.432 --rc genhtml_legend=1 00:12:39.432 --rc geninfo_all_blocks=1 00:12:39.432 --rc geninfo_unexecuted_blocks=1 00:12:39.432 00:12:39.432 ' 00:12:39.432 00:44:31 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:39.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.432 --rc genhtml_branch_coverage=1 00:12:39.432 --rc genhtml_function_coverage=1 00:12:39.432 --rc genhtml_legend=1 00:12:39.432 --rc geninfo_all_blocks=1 00:12:39.432 --rc geninfo_unexecuted_blocks=1 00:12:39.432 00:12:39.432 ' 00:12:39.432 00:44:31 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.432 00:44:31 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.432 00:44:31 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.432 00:44:31 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.432 00:44:31 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.432 00:44:31 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:39.433 00:44:31 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.433 00:44:31 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:39.433 00:44:31 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:39.433 00:44:31 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.433 00:44:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.433 ************************************ 00:12:39.433 START TEST xnvme_to_malloc_dd_copy 00:12:39.433 ************************************ 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:39.433 00:44:31 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:39.433 00:44:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:39.433 { 00:12:39.433 "subsystems": [ 00:12:39.433 { 00:12:39.433 "subsystem": "bdev", 00:12:39.433 "config": [ 00:12:39.433 { 00:12:39.433 "params": { 00:12:39.433 "block_size": 512, 00:12:39.433 "num_blocks": 2097152, 00:12:39.433 "name": "malloc0" 00:12:39.433 }, 00:12:39.433 "method": "bdev_malloc_create" 00:12:39.433 }, 00:12:39.433 { 00:12:39.433 "params": { 00:12:39.433 "io_mechanism": "libaio", 00:12:39.433 "filename": "/dev/nullb0", 00:12:39.433 "name": "null0" 00:12:39.433 }, 00:12:39.433 "method": "bdev_xnvme_create" 00:12:39.433 }, 00:12:39.433 { 00:12:39.433 "method": "bdev_wait_for_examine" 00:12:39.433 } 00:12:39.433 ] 00:12:39.433 } 00:12:39.433 ] 00:12:39.433 } 00:12:39.433 [2024-11-17 00:44:31.479472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
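
The JSON block above is what gen_conf emits: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) paired with an xnvme bdev over /dev/nullb0 using libaio. The copy itself is a single spdk_dd call with that config fed through a file descriptor; the pattern reconstructed from the xnvme.sh@42 trace:

  # --json /dev/fd/62 in the trace corresponds to process substitution;
  # gen_conf is the traced helper that prints the JSON shown above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 \
    --json <(gen_conf)
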
00:12:39.433 [2024-11-17 00:44:31.479601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80904 ] 00:12:39.694 [2024-11-17 00:44:31.630936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.694 [2024-11-17 00:44:31.674192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.070  [2024-11-17T00:44:34.068Z] Copying: 306/1024 [MB] (306 MBps) [2024-11-17T00:44:35.063Z] Copying: 613/1024 [MB] (307 MBps) [2024-11-17T00:44:35.322Z] Copying: 920/1024 [MB] (306 MBps) [2024-11-17T00:44:35.888Z] Copying: 1024/1024 [MB] (average 306 MBps) 00:12:43.825 00:12:43.825 00:44:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:43.825 00:44:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:43.825 00:44:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:43.825 00:44:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:43.825 { 00:12:43.825 "subsystems": [ 00:12:43.825 { 00:12:43.825 "subsystem": "bdev", 00:12:43.825 "config": [ 00:12:43.825 { 00:12:43.825 "params": { 00:12:43.825 "block_size": 512, 00:12:43.825 "num_blocks": 2097152, 00:12:43.825 "name": "malloc0" 00:12:43.825 }, 00:12:43.825 "method": "bdev_malloc_create" 00:12:43.825 }, 00:12:43.825 { 00:12:43.825 "params": { 00:12:43.825 "io_mechanism": "libaio", 00:12:43.825 "filename": "/dev/nullb0", 00:12:43.825 "name": "null0" 00:12:43.825 }, 00:12:43.825 "method": "bdev_xnvme_create" 00:12:43.825 }, 00:12:43.825 { 00:12:43.825 "method": "bdev_wait_for_examine" 00:12:43.825 } 00:12:43.825 ] 00:12:43.825 } 00:12:43.825 ] 00:12:43.825 } 00:12:43.825 [2024-11-17 00:44:35.666828] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:43.825 [2024-11-17 00:44:35.666937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80958 ] 00:12:43.825 [2024-11-17 00:44:35.814470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.825 [2024-11-17 00:44:35.855522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.201  [2024-11-17T00:44:38.200Z] Copying: 309/1024 [MB] (309 MBps) [2024-11-17T00:44:39.135Z] Copying: 620/1024 [MB] (310 MBps) [2024-11-17T00:44:39.702Z] Copying: 930/1024 [MB] (310 MBps) [2024-11-17T00:44:39.961Z] Copying: 1024/1024 [MB] (average 309 MBps) 00:12:47.898 00:12:47.898 00:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:47.898 00:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:47.898 00:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:47.898 00:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:47.898 00:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:47.898 00:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:47.898 { 00:12:47.898 "subsystems": [ 00:12:47.898 { 00:12:47.898 "subsystem": "bdev", 00:12:47.898 "config": [ 00:12:47.898 { 00:12:47.898 "params": { 00:12:47.898 "block_size": 512, 00:12:47.898 "num_blocks": 2097152, 00:12:47.898 "name": "malloc0" 00:12:47.898 }, 00:12:47.898 "method": "bdev_malloc_create" 00:12:47.898 }, 00:12:47.898 { 00:12:47.898 "params": { 00:12:47.898 "io_mechanism": "io_uring", 00:12:47.898 "filename": "/dev/nullb0", 00:12:47.898 "name": "null0" 00:12:47.898 }, 00:12:47.898 "method": "bdev_xnvme_create" 00:12:47.898 }, 00:12:47.898 { 00:12:47.898 "method": "bdev_wait_for_examine" 00:12:47.898 } 00:12:47.898 ] 00:12:47.898 } 00:12:47.898 ] 00:12:47.898 } 00:12:47.898 [2024-11-17 00:44:39.807240] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
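
For the io_uring pass only the io_mechanism field changes (xnvme.sh@39). Expressed as explicit RPC calls instead of a --json config, the same two bdevs would look roughly like this; argument order and units follow SPDK's rpc.py as I recall it (total size in MiB, then block size), so verify against the tree in use:

  # Hypothetical rpc.py equivalent of the JSON config above.
  scripts/rpc.py bdev_malloc_create -b malloc0 1024 512        # 1024 MiB, 512 B blocks
  scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring  # filename name io_mechanism
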
00:12:47.898 [2024-11-17 00:44:39.807367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81012 ] 00:12:47.898 [2024-11-17 00:44:39.954254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.156 [2024-11-17 00:44:39.990211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.534  [2024-11-17T00:44:42.533Z] Copying: 318/1024 [MB] (318 MBps) [2024-11-17T00:44:43.468Z] Copying: 636/1024 [MB] (318 MBps) [2024-11-17T00:44:43.468Z] Copying: 955/1024 [MB] (318 MBps) [2024-11-17T00:44:44.036Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:12:51.973 00:12:51.973 00:44:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:51.973 00:44:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:51.973 00:44:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:51.973 00:44:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:51.973 { 00:12:51.973 "subsystems": [ 00:12:51.973 { 00:12:51.973 "subsystem": "bdev", 00:12:51.973 "config": [ 00:12:51.973 { 00:12:51.973 "params": { 00:12:51.973 "block_size": 512, 00:12:51.973 "num_blocks": 2097152, 00:12:51.973 "name": "malloc0" 00:12:51.973 }, 00:12:51.973 "method": "bdev_malloc_create" 00:12:51.973 }, 00:12:51.973 { 00:12:51.973 "params": { 00:12:51.973 "io_mechanism": "io_uring", 00:12:51.973 "filename": "/dev/nullb0", 00:12:51.973 "name": "null0" 00:12:51.973 }, 00:12:51.973 "method": "bdev_xnvme_create" 00:12:51.973 }, 00:12:51.973 { 00:12:51.973 "method": "bdev_wait_for_examine" 00:12:51.973 } 00:12:51.973 ] 00:12:51.973 } 00:12:51.973 ] 00:12:51.973 } 00:12:51.973 [2024-11-17 00:44:43.812725] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
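
Every pass here targets /dev/nullb0, the kernel null block device that init_null_blk loads (dd/common.sh@186-187, traced earlier as modprobe null_blk gb=1) and remove_null_blk unloads at the end of each suite (dd/common.sh@191, just below). A sketch of that pair; the exact conditional around the module existence check is an assumption:

  init_null_blk() {
    [[ -e /sys/module/null_blk ]] && modprobe -r null_blk   # assumed: reload clean
    modprobe null_blk "$@"          # gb=1 -> one 1 GiB /dev/nullb0
  }
  remove_null_blk() {
    modprobe -r null_blk
  }
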
00:12:51.973 [2024-11-17 00:44:43.812843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81066 ] 00:12:51.973 [2024-11-17 00:44:43.959596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.973 [2024-11-17 00:44:43.995193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.349  [2024-11-17T00:44:46.347Z] Copying: 322/1024 [MB] (322 MBps) [2024-11-17T00:44:47.283Z] Copying: 645/1024 [MB] (323 MBps) [2024-11-17T00:44:47.542Z] Copying: 969/1024 [MB] (323 MBps) [2024-11-17T00:44:47.800Z] Copying: 1024/1024 [MB] (average 323 MBps) 00:12:55.737 00:12:55.737 00:44:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:55.737 00:44:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:55.737 00:12:55.737 real 0m16.356s 00:12:55.737 user 0m13.517s 00:12:55.737 sys 0m2.358s 00:12:55.737 00:44:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.737 ************************************ 00:12:55.737 END TEST xnvme_to_malloc_dd_copy 00:12:55.737 ************************************ 00:12:55.737 00:44:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:55.737 00:44:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:55.737 00:44:47 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:55.737 00:44:47 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.737 00:44:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.737 ************************************ 00:12:55.737 START TEST xnvme_bdevperf 00:12:55.737 ************************************ 00:12:55.737 00:44:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:55.737 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:55.737 00:44:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:55.737 00:44:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:55.995 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:55.996 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:55.996 
00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:55.996 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:55.996 00:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:55.996 00:44:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:55.996 00:44:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.996 { 00:12:55.996 "subsystems": [ 00:12:55.996 { 00:12:55.996 "subsystem": "bdev", 00:12:55.996 "config": [ 00:12:55.996 { 00:12:55.996 "params": { 00:12:55.996 "io_mechanism": "libaio", 00:12:55.996 "filename": "/dev/nullb0", 00:12:55.996 "name": "null0" 00:12:55.996 }, 00:12:55.996 "method": "bdev_xnvme_create" 00:12:55.996 }, 00:12:55.996 { 00:12:55.996 "method": "bdev_wait_for_examine" 00:12:55.996 } 00:12:55.996 ] 00:12:55.996 } 00:12:55.996 ] 00:12:55.996 } 00:12:55.996 [2024-11-17 00:44:47.879208] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:55.996 [2024-11-17 00:44:47.879325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81142 ] 00:12:55.996 [2024-11-17 00:44:48.027876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.254 [2024-11-17 00:44:48.071126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.254 Running I/O for 5 seconds... 00:12:58.124 209728.00 IOPS, 819.25 MiB/s [2024-11-17T00:44:51.561Z] 209824.00 IOPS, 819.62 MiB/s [2024-11-17T00:44:52.494Z] 209898.67 IOPS, 819.92 MiB/s [2024-11-17T00:44:53.429Z] 209952.00 IOPS, 820.12 MiB/s 00:13:01.366 Latency(us) 00:13:01.366 [2024-11-17T00:44:53.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.366 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:01.366 null0 : 5.00 209962.29 820.17 0.00 0.00 302.70 107.13 1518.67 00:13:01.366 [2024-11-17T00:44:53.429Z] =================================================================================================================== 00:13:01.366 [2024-11-17T00:44:53.429Z] Total : 209962.29 820.17 0.00 0.00 302.70 107.13 1518.67 00:13:01.366 00:44:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:01.366 00:44:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:01.366 00:44:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:01.366 00:44:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:01.366 00:44:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.366 00:44:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.366 { 00:13:01.366 "subsystems": [ 00:13:01.366 { 00:13:01.366 "subsystem": "bdev", 00:13:01.366 "config": [ 00:13:01.366 { 00:13:01.366 "params": { 00:13:01.366 "io_mechanism": "io_uring", 00:13:01.366 "filename": "/dev/nullb0", 00:13:01.366 "name": "null0" 00:13:01.366 }, 00:13:01.366 "method": "bdev_xnvme_create" 00:13:01.366 }, 00:13:01.366 { 00:13:01.366 "method": 
"bdev_wait_for_examine" 00:13:01.366 } 00:13:01.366 ] 00:13:01.366 } 00:13:01.366 ] 00:13:01.366 } 00:13:01.366 [2024-11-17 00:44:53.371025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:01.366 [2024-11-17 00:44:53.371145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81205 ] 00:13:01.626 [2024-11-17 00:44:53.518960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.626 [2024-11-17 00:44:53.561895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.626 Running I/O for 5 seconds... 00:13:03.935 239552.00 IOPS, 935.75 MiB/s [2024-11-17T00:44:56.932Z] 239456.00 IOPS, 935.38 MiB/s [2024-11-17T00:44:57.866Z] 239424.00 IOPS, 935.25 MiB/s [2024-11-17T00:44:58.803Z] 239408.00 IOPS, 935.19 MiB/s 00:13:06.740 Latency(us) 00:13:06.740 [2024-11-17T00:44:58.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.740 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:06.740 null0 : 5.00 239285.35 934.71 0.00 0.00 265.41 147.30 1480.86 00:13:06.740 [2024-11-17T00:44:58.803Z] =================================================================================================================== 00:13:06.740 [2024-11-17T00:44:58.803Z] Total : 239285.35 934.71 0.00 0.00 265.41 147.30 1480.86 00:13:06.740 00:44:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:06.740 00:44:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:07.002 00:13:07.002 real 0m11.031s 00:13:07.002 user 0m8.632s 00:13:07.002 sys 0m2.162s 00:13:07.002 00:44:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.002 00:44:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.002 ************************************ 00:13:07.002 END TEST xnvme_bdevperf 00:13:07.002 ************************************ 00:13:07.002 00:13:07.002 real 0m27.658s 00:13:07.002 user 0m22.271s 00:13:07.002 sys 0m4.644s 00:13:07.002 00:44:58 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.002 ************************************ 00:13:07.002 END TEST nvme_xnvme 00:13:07.002 ************************************ 00:13:07.002 00:44:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.002 00:44:58 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:07.002 00:44:58 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:07.002 00:44:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.002 00:44:58 -- common/autotest_common.sh@10 -- # set +x 00:13:07.002 ************************************ 00:13:07.002 START TEST blockdev_xnvme 00:13:07.002 ************************************ 00:13:07.002 00:44:58 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:07.002 * Looking for test storage... 
00:13:07.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:07.002 00:44:58 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:07.002 00:44:58 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:07.002 00:44:58 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:07.002 00:44:59 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:07.002 00:44:59 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.003 00:44:59 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:07.003 00:44:59 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.003 00:44:59 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.003 00:44:59 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.003 00:44:59 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:07.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.003 --rc genhtml_branch_coverage=1 00:13:07.003 --rc genhtml_function_coverage=1 00:13:07.003 --rc genhtml_legend=1 00:13:07.003 --rc geninfo_all_blocks=1 00:13:07.003 --rc geninfo_unexecuted_blocks=1 00:13:07.003 00:13:07.003 ' 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:07.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.003 --rc genhtml_branch_coverage=1 00:13:07.003 --rc genhtml_function_coverage=1 00:13:07.003 --rc genhtml_legend=1 
00:13:07.003 --rc geninfo_all_blocks=1 00:13:07.003 --rc geninfo_unexecuted_blocks=1 00:13:07.003 00:13:07.003 ' 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:07.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.003 --rc genhtml_branch_coverage=1 00:13:07.003 --rc genhtml_function_coverage=1 00:13:07.003 --rc genhtml_legend=1 00:13:07.003 --rc geninfo_all_blocks=1 00:13:07.003 --rc geninfo_unexecuted_blocks=1 00:13:07.003 00:13:07.003 ' 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:07.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.003 --rc genhtml_branch_coverage=1 00:13:07.003 --rc genhtml_function_coverage=1 00:13:07.003 --rc genhtml_legend=1 00:13:07.003 --rc geninfo_all_blocks=1 00:13:07.003 --rc geninfo_unexecuted_blocks=1 00:13:07.003 00:13:07.003 ' 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81342 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81342 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 81342 ']' 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.003 00:44:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.003 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:07.263 [2024-11-17 00:44:59.139777] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:07.263 [2024-11-17 00:44:59.139932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81342 ] 00:13:07.263 [2024-11-17 00:44:59.289583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.263 [2024-11-17 00:44:59.319983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.198 00:44:59 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.198 00:44:59 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:08.198 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:08.198 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:08.198 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:08.198 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:08.198 00:44:59 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:08.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:08.456 Waiting for block devices as requested 00:13:08.456 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.716 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.716 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.716 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.040 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme1n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.040 00:45:05 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:14.040 nvme0n1 00:13:14.040 nvme1n1 00:13:14.040 nvme2n1 00:13:14.040 nvme2n2 00:13:14.040 nvme2n3 00:13:14.040 nvme3n1 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.040 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.040 00:45:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.041 00:45:05 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2291c43b-0077-4046-b401-120e0d5314e8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2291c43b-0077-4046-b401-120e0d5314e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "86531951-92cd-4711-83f0-ce5bcf81494c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "86531951-92cd-4711-83f0-ce5bcf81494c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4af134a4-aeb4-4996-88f1-2a801ffc476a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4af134a4-aeb4-4996-88f1-2a801ffc476a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d63b4d9c-35b5-429c-9aee-c778f4c96463"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d63b4d9c-35b5-429c-9aee-c778f4c96463",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "b06cc729-f3fb-477e-a38b-41a629df4c7d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b06cc729-f3fb-477e-a38b-41a629df4c7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "02cd6fa9-cda3-4e67-921f-5abc32d4cd5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "02cd6fa9-cda3-4e67-921f-5abc32d4cd5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:14.041 00:45:05 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81342 
00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 81342 ']' 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 81342 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81342 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.041 killing process with pid 81342 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81342' 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 81342 00:13:14.041 00:45:05 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 81342 00:13:14.302 00:45:06 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:14.303 00:45:06 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:14.303 00:45:06 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:14.303 00:45:06 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.303 00:45:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.303 ************************************ 00:13:14.303 START TEST bdev_hello_world 00:13:14.303 ************************************ 00:13:14.303 00:45:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:14.303 [2024-11-17 00:45:06.241006] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:14.303 [2024-11-17 00:45:06.241125] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81696 ] 00:13:14.565 [2024-11-17 00:45:06.387446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.565 [2024-11-17 00:45:06.430734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.565 [2024-11-17 00:45:06.594986] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:14.565 [2024-11-17 00:45:06.595033] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:14.565 [2024-11-17 00:45:06.595050] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:14.565 [2024-11-17 00:45:06.597105] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:14.565 [2024-11-17 00:45:06.597620] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:14.565 [2024-11-17 00:45:06.597646] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:14.565 [2024-11-17 00:45:06.598442] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
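(The hello_bdev example just completed its write/read round trip against nvme0n1. Its invocation appears verbatim in the trace above; shown standalone here for readability, with paths relative to the repository root used in this CI tree.)

./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1
# expected NOTICE sequence, as logged: open bdev -> open io channel -> write -> read -> "Hello World!" -> stop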
00:13:14.565 00:13:14.565 [2024-11-17 00:45:06.598489] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:14.827 00:13:14.827 real 0m0.575s 00:13:14.827 user 0m0.298s 00:13:14.827 sys 0m0.163s 00:13:14.827 ************************************ 00:13:14.827 END TEST bdev_hello_world 00:13:14.827 ************************************ 00:13:14.827 00:45:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.827 00:45:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:14.827 00:45:06 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:14.827 00:45:06 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:14.827 00:45:06 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.827 00:45:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.827 ************************************ 00:13:14.827 START TEST bdev_bounds 00:13:14.827 ************************************ 00:13:14.827 00:45:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:14.827 00:45:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81716 00:13:14.827 00:45:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:14.827 Process bdevio pid: 81716 00:13:14.827 00:45:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81716' 00:13:14.827 00:45:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:14.827 00:45:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81716 00:13:14.827 00:45:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81716 ']' 00:13:14.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.828 00:45:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.828 00:45:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.828 00:45:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.828 00:45:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.828 00:45:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:14.828 [2024-11-17 00:45:06.883509] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:14.828 [2024-11-17 00:45:06.883657] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81716 ] 00:13:15.089 [2024-11-17 00:45:07.031307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.089 [2024-11-17 00:45:07.087555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.089 [2024-11-17 00:45:07.087918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.089 [2024-11-17 00:45:07.087952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.034 00:45:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.034 00:45:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:16.034 00:45:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:16.034 I/O targets: 00:13:16.034 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:16.034 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:16.034 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:16.034 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:16.034 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:16.034 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:16.034 00:13:16.034 00:13:16.034 CUnit - A unit testing framework for C - Version 2.1-3 00:13:16.034 http://cunit.sourceforge.net/ 00:13:16.034 00:13:16.034 00:13:16.034 Suite: bdevio tests on: nvme3n1 00:13:16.034 Test: blockdev write read block ...passed 00:13:16.034 Test: blockdev write zeroes read block ...passed 00:13:16.034 Test: blockdev write zeroes read no split ...passed 00:13:16.034 Test: blockdev write zeroes read split ...passed 00:13:16.034 Test: blockdev write zeroes read split partial ...passed 00:13:16.034 Test: blockdev reset ...passed 00:13:16.034 Test: blockdev write read 8 blocks ...passed 00:13:16.034 Test: blockdev write read size > 128k ...passed 00:13:16.034 Test: blockdev write read invalid size ...passed 00:13:16.034 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.034 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.034 Test: blockdev write read max offset ...passed 00:13:16.034 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.034 Test: blockdev writev readv 8 blocks ...passed 00:13:16.034 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.034 Test: blockdev writev readv block ...passed 00:13:16.034 Test: blockdev writev readv size > 128k ...passed 00:13:16.034 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.034 Test: blockdev comparev and writev ...passed 00:13:16.034 Test: blockdev nvme passthru rw ...passed 00:13:16.034 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.034 Test: blockdev nvme admin passthru ...passed 00:13:16.034 Test: blockdev copy ...passed 00:13:16.034 Suite: bdevio tests on: nvme2n3 00:13:16.034 Test: blockdev write read block ...passed 00:13:16.034 Test: blockdev write zeroes read block ...passed 00:13:16.034 Test: blockdev write zeroes read no split ...passed 00:13:16.034 Test: blockdev write zeroes read split ...passed 00:13:16.034 Test: blockdev write zeroes read split partial ...passed 00:13:16.034 Test: blockdev reset ...passed 
00:13:16.034 Test: blockdev write read 8 blocks ...passed 00:13:16.034 Test: blockdev write read size > 128k ...passed 00:13:16.034 Test: blockdev write read invalid size ...passed 00:13:16.034 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.034 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.034 Test: blockdev write read max offset ...passed 00:13:16.034 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.034 Test: blockdev writev readv 8 blocks ...passed 00:13:16.034 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.034 Test: blockdev writev readv block ...passed 00:13:16.034 Test: blockdev writev readv size > 128k ...passed 00:13:16.034 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.034 Test: blockdev comparev and writev ...passed 00:13:16.034 Test: blockdev nvme passthru rw ...passed 00:13:16.034 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.034 Test: blockdev nvme admin passthru ...passed 00:13:16.034 Test: blockdev copy ...passed 00:13:16.034 Suite: bdevio tests on: nvme2n2 00:13:16.034 Test: blockdev write read block ...passed 00:13:16.034 Test: blockdev write zeroes read block ...passed 00:13:16.034 Test: blockdev write zeroes read no split ...passed 00:13:16.034 Test: blockdev write zeroes read split ...passed 00:13:16.034 Test: blockdev write zeroes read split partial ...passed 00:13:16.034 Test: blockdev reset ...passed 00:13:16.034 Test: blockdev write read 8 blocks ...passed 00:13:16.034 Test: blockdev write read size > 128k ...passed 00:13:16.034 Test: blockdev write read invalid size ...passed 00:13:16.034 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.034 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.034 Test: blockdev write read max offset ...passed 00:13:16.034 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.034 Test: blockdev writev readv 8 blocks ...passed 00:13:16.034 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.034 Test: blockdev writev readv block ...passed 00:13:16.034 Test: blockdev writev readv size > 128k ...passed 00:13:16.034 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.034 Test: blockdev comparev and writev ...passed 00:13:16.034 Test: blockdev nvme passthru rw ...passed 00:13:16.034 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.034 Test: blockdev nvme admin passthru ...passed 00:13:16.034 Test: blockdev copy ...passed 00:13:16.034 Suite: bdevio tests on: nvme2n1 00:13:16.034 Test: blockdev write read block ...passed 00:13:16.034 Test: blockdev write zeroes read block ...passed 00:13:16.034 Test: blockdev write zeroes read no split ...passed 00:13:16.034 Test: blockdev write zeroes read split ...passed 00:13:16.034 Test: blockdev write zeroes read split partial ...passed 00:13:16.034 Test: blockdev reset ...passed 00:13:16.034 Test: blockdev write read 8 blocks ...passed 00:13:16.034 Test: blockdev write read size > 128k ...passed 00:13:16.034 Test: blockdev write read invalid size ...passed 00:13:16.034 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.034 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.034 Test: blockdev write read max offset ...passed 00:13:16.034 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.034 Test: blockdev writev readv 8 blocks 
...passed 00:13:16.034 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.034 Test: blockdev writev readv block ...passed 00:13:16.034 Test: blockdev writev readv size > 128k ...passed 00:13:16.034 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.034 Test: blockdev comparev and writev ...passed 00:13:16.034 Test: blockdev nvme passthru rw ...passed 00:13:16.034 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.034 Test: blockdev nvme admin passthru ...passed 00:13:16.034 Test: blockdev copy ...passed 00:13:16.034 Suite: bdevio tests on: nvme1n1 00:13:16.034 Test: blockdev write read block ...passed 00:13:16.034 Test: blockdev write zeroes read block ...passed 00:13:16.034 Test: blockdev write zeroes read no split ...passed 00:13:16.034 Test: blockdev write zeroes read split ...passed 00:13:16.034 Test: blockdev write zeroes read split partial ...passed 00:13:16.034 Test: blockdev reset ...passed 00:13:16.034 Test: blockdev write read 8 blocks ...passed 00:13:16.034 Test: blockdev write read size > 128k ...passed 00:13:16.034 Test: blockdev write read invalid size ...passed 00:13:16.034 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.034 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.034 Test: blockdev write read max offset ...passed 00:13:16.034 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.034 Test: blockdev writev readv 8 blocks ...passed 00:13:16.034 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.034 Test: blockdev writev readv block ...passed 00:13:16.034 Test: blockdev writev readv size > 128k ...passed 00:13:16.034 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.035 Test: blockdev comparev and writev ...passed 00:13:16.035 Test: blockdev nvme passthru rw ...passed 00:13:16.035 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.035 Test: blockdev nvme admin passthru ...passed 00:13:16.035 Test: blockdev copy ...passed 00:13:16.035 Suite: bdevio tests on: nvme0n1 00:13:16.035 Test: blockdev write read block ...passed 00:13:16.035 Test: blockdev write zeroes read block ...passed 00:13:16.035 Test: blockdev write zeroes read no split ...passed 00:13:16.296 Test: blockdev write zeroes read split ...passed 00:13:16.296 Test: blockdev write zeroes read split partial ...passed 00:13:16.296 Test: blockdev reset ...passed 00:13:16.296 Test: blockdev write read 8 blocks ...passed 00:13:16.296 Test: blockdev write read size > 128k ...passed 00:13:16.296 Test: blockdev write read invalid size ...passed 00:13:16.296 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.296 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.296 Test: blockdev write read max offset ...passed 00:13:16.296 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.296 Test: blockdev writev readv 8 blocks ...passed 00:13:16.296 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.296 Test: blockdev writev readv block ...passed 00:13:16.296 Test: blockdev writev readv size > 128k ...passed 00:13:16.296 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.296 Test: blockdev comparev and writev ...passed 00:13:16.296 Test: blockdev nvme passthru rw ...passed 00:13:16.296 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.296 Test: blockdev nvme admin passthru ...passed 00:13:16.296 Test: blockdev copy ...passed 
00:13:16.296 00:13:16.296 Run Summary: Type Total Ran Passed Failed Inactive 00:13:16.296 suites 6 6 n/a 0 0 00:13:16.296 tests 138 138 138 0 0 00:13:16.296 asserts 780 780 780 0 n/a 00:13:16.296 00:13:16.296 Elapsed time = 0.620 seconds 00:13:16.296 0 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81716 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81716 ']' 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81716 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81716 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:16.296 killing process with pid 81716 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81716' 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81716 00:13:16.296 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81716 00:13:16.558 00:45:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:16.558 00:13:16.558 real 0m1.600s 00:13:16.558 user 0m3.821s 00:13:16.558 sys 0m0.373s 00:13:16.558 ************************************ 00:13:16.558 END TEST bdev_bounds 00:13:16.558 ************************************ 00:13:16.558 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.558 00:45:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:16.558 00:45:08 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:16.558 00:45:08 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:16.558 00:45:08 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.558 00:45:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.558 ************************************ 00:13:16.558 START TEST bdev_nbd 00:13:16.558 ************************************ 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
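(The bdev_nbd test being set up here starts bdev_svc on /var/tmp/spdk-nbd.sock, exports each xnvme bdev as a kernel /dev/nbdN device, and verifies it with a one-block direct-I/O dd, as the per-device traces below show. A condensed sketch of one export/verify/stop cycle, using the socket path and bdev name from this run; the dd target file is illustrative.)

# attach a bdev to an NBD device; the RPC prints the assigned node, e.g. /dev/nbd0
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
# read one 4096-byte block through the kernel block layer to prove the mapping works
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
# detach the device again
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0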
00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81773 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81773 /var/tmp/spdk-nbd.sock 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81773 ']' 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:16.558 00:45:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:16.558 [2024-11-17 00:45:08.571561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:16.558 [2024-11-17 00:45:08.571708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.819 [2024-11-17 00:45:08.728917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.819 [2024-11-17 00:45:08.778674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.390 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.652 
1+0 records in 00:13:17.652 1+0 records out 00:13:17.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555122 s, 7.4 MB/s 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.652 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.912 1+0 records in 00:13:17.912 1+0 records out 00:13:17.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127273 s, 3.2 MB/s 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.912 00:45:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:18.171 00:45:10 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.171 1+0 records in 00:13:18.171 1+0 records out 00:13:18.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000980039 s, 4.2 MB/s 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:18.171 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.430 1+0 records in 00:13:18.430 1+0 records out 00:13:18.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141981 s, 2.9 MB/s 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:18.430 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:18.690 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:18.690 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:18.690 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:18.690 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:18.690 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.690 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.690 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.691 1+0 records in 00:13:18.691 1+0 records out 00:13:18.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00438622 s, 934 kB/s 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:18.691 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:18.951 00:45:10 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.951 1+0 records in 00:13:18.951 1+0 records out 00:13:18.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000892444 s, 4.6 MB/s 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:18.951 00:45:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd0", 00:13:19.212 "bdev_name": "nvme0n1" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd1", 00:13:19.212 "bdev_name": "nvme1n1" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd2", 00:13:19.212 "bdev_name": "nvme2n1" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd3", 00:13:19.212 "bdev_name": "nvme2n2" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd4", 00:13:19.212 "bdev_name": "nvme2n3" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd5", 00:13:19.212 "bdev_name": "nvme3n1" 00:13:19.212 } 00:13:19.212 ]' 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd0", 00:13:19.212 "bdev_name": "nvme0n1" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd1", 00:13:19.212 "bdev_name": "nvme1n1" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd2", 00:13:19.212 "bdev_name": "nvme2n1" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd3", 00:13:19.212 "bdev_name": "nvme2n2" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd4", 00:13:19.212 "bdev_name": "nvme2n3" 00:13:19.212 }, 00:13:19.212 { 00:13:19.212 "nbd_device": "/dev/nbd5", 00:13:19.212 "bdev_name": "nvme3n1" 00:13:19.212 } 00:13:19.212 ]' 00:13:19.212 00:45:11 
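
# The waitfornbd helper traced above is the readiness gate used after every
# nbd_start_disk: it polls /proc/partitions until the kernel registers the
# device, then proves the device is actually readable with a direct-I/O dd
# and a stat/rm round-trip on a scratch file. A minimal standalone sketch of
# the same pattern (function name and scratch path are illustrative):
waitfornbd_sketch() {
    local nbd_name=$1 scratch=/tmp/nbdtest i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # read one 4 KiB block straight off the device, bypassing the page cache
    dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$scratch")
    rm -f "$scratch"
    [ "$size" != 0 ]
}
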
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.212 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.473 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.734 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.996 00:45:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.258 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.520 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.782 00:45:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:21.044 /dev/nbd0 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- 
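
# nbd_get_count, traced above, turns the nbd_get_disks JSON into a device
# count: jq extracts every .nbd_device field and grep -c counts the /dev/nbd
# entries, so an empty list ('[]') yields 0 and the teardown check passes.
# Roughly (rpc.py path abbreviated):
count_nbd_devices() {
    local rpc_sock=$1 json
    json=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks)
    # grep -c exits non-zero when nothing matches, hence the '|| true' guard
    echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}
# e.g. count=$(count_nbd_devices /var/tmp/spdk-nbd.sock)
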
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.044 1+0 records in 00:13:21.044 1+0 records out 00:13:21.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977056 s, 4.2 MB/s 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.044 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:21.306 /dev/nbd1 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.306 1+0 records in 00:13:21.306 1+0 records out 00:13:21.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000973867 s, 4.2 MB/s 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:21.306 00:45:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.306 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:21.568 /dev/nbd10 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.568 1+0 records in 00:13:21.568 1+0 records out 00:13:21.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110417 s, 3.7 MB/s 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.568 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:21.830 /dev/nbd11 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:21.830 00:45:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.830 1+0 records in 00:13:21.830 1+0 records out 00:13:21.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105535 s, 3.9 MB/s 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.830 00:45:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:22.092 /dev/nbd12 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.092 1+0 records in 00:13:22.092 1+0 records out 00:13:22.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688791 s, 5.9 MB/s 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:22.092 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:22.354 /dev/nbd13 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.354 1+0 records in 00:13:22.354 1+0 records out 00:13:22.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100101 s, 4.1 MB/s 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.354 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd0", 00:13:22.616 "bdev_name": "nvme0n1" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd1", 00:13:22.616 "bdev_name": "nvme1n1" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd10", 00:13:22.616 "bdev_name": "nvme2n1" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd11", 00:13:22.616 "bdev_name": "nvme2n2" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd12", 00:13:22.616 "bdev_name": "nvme2n3" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd13", 00:13:22.616 "bdev_name": "nvme3n1" 00:13:22.616 } 00:13:22.616 ]' 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd 
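
# nbd_start_disks, whose trace fills the block above, walks the bdev and nbd
# lists in lockstep: one nbd_start_disk RPC per pair, each followed by the
# waitfornbd readiness gate. Condensed, with the lists from this run
# (rpc.py path abbreviated):
rpc_sock=/var/tmp/spdk-nbd.sock
bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for ((i = 0; i < ${#bdev_list[@]}; i++)); do
    scripts/rpc.py -s "$rpc_sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
done
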
-- bdev/nbd_common.sh@64 -- # echo '[ 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd0", 00:13:22.616 "bdev_name": "nvme0n1" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd1", 00:13:22.616 "bdev_name": "nvme1n1" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd10", 00:13:22.616 "bdev_name": "nvme2n1" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd11", 00:13:22.616 "bdev_name": "nvme2n2" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd12", 00:13:22.616 "bdev_name": "nvme2n3" 00:13:22.616 }, 00:13:22.616 { 00:13:22.616 "nbd_device": "/dev/nbd13", 00:13:22.616 "bdev_name": "nvme3n1" 00:13:22.616 } 00:13:22.616 ]' 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:22.616 /dev/nbd1 00:13:22.616 /dev/nbd10 00:13:22.616 /dev/nbd11 00:13:22.616 /dev/nbd12 00:13:22.616 /dev/nbd13' 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:22.616 /dev/nbd1 00:13:22.616 /dev/nbd10 00:13:22.616 /dev/nbd11 00:13:22.616 /dev/nbd12 00:13:22.616 /dev/nbd13' 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:22.616 256+0 records in 00:13:22.616 256+0 records out 00:13:22.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00687903 s, 152 MB/s 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:22.616 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:22.877 256+0 records in 00:13:22.877 256+0 records out 00:13:22.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239821 s, 4.4 MB/s 00:13:22.877 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:22.877 00:45:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:23.136 256+0 records in 00:13:23.136 256+0 records out 00:13:23.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.20713 s, 
5.1 MB/s 00:13:23.137 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.137 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:23.398 256+0 records in 00:13:23.398 256+0 records out 00:13:23.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174713 s, 6.0 MB/s 00:13:23.398 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.398 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:23.660 256+0 records in 00:13:23.660 256+0 records out 00:13:23.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22956 s, 4.6 MB/s 00:13:23.660 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.660 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:23.922 256+0 records in 00:13:23.922 256+0 records out 00:13:23.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235011 s, 4.5 MB/s 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:23.922 256+0 records in 00:13:23.922 256+0 records out 00:13:23.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203293 s, 5.2 MB/s 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:23.922 
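
# nbd_dd_data_verify, running above, is the actual data-integrity test: one
# shared 1 MiB random file is written to every exported device with
# O_DIRECT, then each device is compared byte-for-byte against the source.
# In outline (paths shortened):
tmp_file=test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write phase
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                              # verify phase
done
rm "$tmp_file"
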
00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.922 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:24.181 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:24.181 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:24.181 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.181 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:24.181 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.182 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:24.182 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.182 00:45:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.182 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.442 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.701 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.962 00:45:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.962 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.224 
00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.224 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:25.485 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:25.746 malloc_lvol_verify 00:13:25.746 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:26.066 73c10c28-ff5b-40fc-b922-fa4bdc519c54 00:13:26.066 00:45:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:26.066 156fb8b6-9f89-4f4a-8161-09c44b58cec8 00:13:26.066 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:26.328 /dev/nbd0 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:26.328 mke2fs 1.47.0 (5-Feb-2023) 00:13:26.328 Discarding device blocks: 0/4096 
done 00:13:26.328 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:26.328 00:13:26.328 Allocating group tables: 0/1 done 00:13:26.328 Writing inode tables: 0/1 done 00:13:26.328 Creating journal (1024 blocks): done 00:13:26.328 Writing superblocks and filesystem accounting information: 0/1 done 00:13:26.328 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.328 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81773 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81773 ']' 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81773 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81773 00:13:26.590 killing process with pid 81773 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81773' 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81773 00:13:26.590 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81773 00:13:26.851 ************************************ 00:13:26.851 END TEST bdev_nbd 00:13:26.851 ************************************ 00:13:26.851 00:45:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:26.851 00:13:26.851 real 0m10.366s 00:13:26.851 user 0m14.074s 00:13:26.851 sys 0m3.824s 00:13:26.851 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.851 00:45:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
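
# nbd_with_lvol_verify, which just completed above, checks that a logical
# volume stacked on a malloc bdev still behaves like a disk once exported
# over NBD: it builds the stack over RPC, waits for the kernel to report a
# non-zero capacity, then formats the device. The sequence, condensed
# (rpc.py path abbreviated):
rpc_sock=/var/tmp/spdk-nbd.sock
scripts/rpc.py -s "$rpc_sock" bdev_malloc_create -b malloc_lvol_verify 16 512
scripts/rpc.py -s "$rpc_sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
scripts/rpc.py -s "$rpc_sock" bdev_lvol_create lvol 4 -l lvs
scripts/rpc.py -s "$rpc_sock" nbd_start_disk lvs/lvol /dev/nbd0
# wait until the kernel reports a non-zero size (8192 x 512 B = 4 MiB here)
until [[ -e /sys/block/nbd0/size && $(cat /sys/block/nbd0/size) -gt 0 ]]; do
    sleep 0.1
done
mkfs.ext4 /dev/nbd0
scripts/rpc.py -s "$rpc_sock" nbd_stop_disk /dev/nbd0
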
00:13:27.113 00:45:18 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:27.113 00:45:18 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:27.113 00:45:18 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:27.113 00:45:18 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:27.113 00:45:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:27.113 00:45:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.113 00:45:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.113 ************************************ 00:13:27.113 START TEST bdev_fio 00:13:27.113 ************************************ 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:27.113 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.113 00:45:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:27.113 ************************************ 00:13:27.113 START TEST bdev_fio_rw_verify 00:13:27.113 ************************************ 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
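
# fio_test_suite assembles bdev.fio on the fly: fio_config_gen writes the
# global verify section, then one [job_<bdev>] stanza is appended per bdev,
# as the echo trace above shows. Reduced to its essentials (paths shortened;
# the spdk_bdev ioengine is supplied by the preloaded plugin, see the
# sketch further below):
bdevs_name=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
for b in "${bdevs_name[@]}"; do
    {
        echo "[job_$b]"
        echo "filename=$b"
    } >> test/bdev/bdev.fio
done
fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=test/bdev/bdev.json --spdk_mem=0
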
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:27.113 00:45:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:27.373 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.373 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.373 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.373 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.373 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.373 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.373 fio-3.35 00:13:27.373 Starting 6 threads 00:13:39.615 00:13:39.615 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82173: Sun Nov 17 00:45:29 2024 00:13:39.615 read: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(435MiB/10002msec) 00:13:39.615 slat (usec): min=2, max=2498, avg= 7.09, stdev=20.07 00:13:39.615 clat (usec): min=87, max=8686, avg=1837.72, stdev=894.73 00:13:39.615 lat (usec): min=96, max=8704, avg=1844.81, stdev=895.35 
00:13:39.615 clat percentiles (usec): 00:13:39.615 | 50.000th=[ 1713], 99.000th=[ 4686], 99.900th=[ 6390], 99.990th=[ 8586], 00:13:39.615 | 99.999th=[ 8717] 00:13:39.615 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(448MiB/10002msec); 0 zone resets 00:13:39.615 slat (usec): min=4, max=5152, avg=45.10, stdev=165.13 00:13:39.615 clat (usec): min=107, max=9883, avg=2041.70, stdev=967.47 00:13:39.615 lat (usec): min=121, max=9920, avg=2086.80, stdev=980.73 00:13:39.615 clat percentiles (usec): 00:13:39.615 | 50.000th=[ 1876], 99.000th=[ 5080], 99.900th=[ 6980], 99.990th=[ 9110], 00:13:39.615 | 99.999th=[ 9896] 00:13:39.615 bw ( KiB/s): min=36067, max=49408, per=99.74%, avg=45767.05, stdev=757.17, samples=114 00:13:39.615 iops : min= 9014, max=12352, avg=11440.68, stdev=189.26, samples=114 00:13:39.615 lat (usec) : 100=0.01%, 250=0.26%, 500=2.02%, 750=4.05%, 1000=6.86% 00:13:39.615 lat (msec) : 2=46.64%, 4=36.96%, 10=3.19% 00:13:39.615 cpu : usr=46.63%, sys=31.46%, ctx=4680, majf=0, minf=12496 00:13:39.615 IO depths : 1=11.5%, 2=23.9%, 4=51.1%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.615 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.615 issued rwts: total=111458,114743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.615 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:39.615 00:13:39.615 Run status group 0 (all jobs): 00:13:39.615 READ: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=435MiB (457MB), run=10002-10002msec 00:13:39.615 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=448MiB (470MB), run=10002-10002msec 00:13:39.615 ----------------------------------------------------- 00:13:39.615 Suppressions used: 00:13:39.615 count bytes template 00:13:39.615 6 48 /usr/src/fio/parse.c 00:13:39.615 3210 308160 /usr/src/fio/iolog.c 00:13:39.615 1 8 libtcmalloc_minimal.so 00:13:39.615 1 904 libcrypto.so 00:13:39.615 ----------------------------------------------------- 00:13:39.615 00:13:39.615 00:13:39.615 real 0m11.211s 00:13:39.615 user 0m28.703s 00:13:39.615 sys 0m19.267s 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.615 ************************************ 00:13:39.615 END TEST bdev_fio_rw_verify 00:13:39.615 ************************************ 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2291c43b-0077-4046-b401-120e0d5314e8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2291c43b-0077-4046-b401-120e0d5314e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "86531951-92cd-4711-83f0-ce5bcf81494c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "86531951-92cd-4711-83f0-ce5bcf81494c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4af134a4-aeb4-4996-88f1-2a801ffc476a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4af134a4-aeb4-4996-88f1-2a801ffc476a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d63b4d9c-35b5-429c-9aee-c778f4c96463"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d63b4d9c-35b5-429c-9aee-c778f4c96463",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "b06cc729-f3fb-477e-a38b-41a629df4c7d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b06cc729-f3fb-477e-a38b-41a629df4c7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "02cd6fa9-cda3-4e67-921f-5abc32d4cd5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "02cd6fa9-cda3-4e67-921f-5abc32d4cd5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.615 /home/vagrant/spdk_repo/spdk 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:39.615 00:45:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:39.615 00:13:39.616 real 0m11.387s 00:13:39.616 user 
0m28.777s 00:13:39.616 sys 0m19.345s 00:13:39.616 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.616 ************************************ 00:13:39.616 END TEST bdev_fio 00:13:39.616 ************************************ 00:13:39.616 00:45:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:39.616 00:45:30 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:39.616 00:45:30 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.616 00:45:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:39.616 00:45:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.616 00:45:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.616 ************************************ 00:13:39.616 START TEST bdev_verify 00:13:39.616 ************************************ 00:13:39.616 00:45:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.616 [2024-11-17 00:45:30.459738] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:39.616 [2024-11-17 00:45:30.459887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82336 ] 00:13:39.616 [2024-11-17 00:45:30.613854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:39.616 [2024-11-17 00:45:30.666611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.616 [2024-11-17 00:45:30.666683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.616 Running I/O for 5 seconds... 
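[Editor's note on reproducing this stage: bdev_verify simply runs the bdevperf example in verify mode against the generated xnvme bdev config — 128 outstanding 4 KiB I/Os for 5 seconds with reactors on two cores. A minimal standalone sketch of the invocation logged above, assuming the repo checkout at /home/vagrant/spdk_repo/spdk from this run:

  # queued verify workload: 128-deep, 4 KiB blocks, 5 s, core mask 0x3 (cores 0-1)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
]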
00:13:41.132 23840.00 IOPS, 93.12 MiB/s [2024-11-17T00:45:34.138Z] 23968.00 IOPS, 93.62 MiB/s [2024-11-17T00:45:35.079Z] 24288.00 IOPS, 94.88 MiB/s [2024-11-17T00:45:36.023Z] 24872.00 IOPS, 97.16 MiB/s [2024-11-17T00:45:36.024Z] 25152.00 IOPS, 98.25 MiB/s 00:13:43.961 Latency(us) 00:13:43.961 [2024-11-17T00:45:36.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.961 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x0 length 0xa0000 00:13:43.961 nvme0n1 : 5.01 1840.66 7.19 0.00 0.00 69391.05 9376.69 83886.08 00:13:43.961 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0xa0000 length 0xa0000 00:13:43.961 nvme0n1 : 5.02 2116.03 8.27 0.00 0.00 60377.45 7561.85 64931.05 00:13:43.961 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x0 length 0xbd0bd 00:13:43.961 nvme1n1 : 5.07 2330.62 9.10 0.00 0.00 54589.92 7309.78 57671.68 00:13:43.961 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:43.961 nvme1n1 : 5.05 2599.68 10.16 0.00 0.00 49005.34 6452.78 56865.08 00:13:43.961 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x0 length 0x80000 00:13:43.961 nvme2n1 : 5.06 1846.97 7.21 0.00 0.00 68595.31 14216.27 69770.63 00:13:43.961 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x80000 length 0x80000 00:13:43.961 nvme2n1 : 5.05 2155.09 8.42 0.00 0.00 59085.81 12098.95 65737.65 00:13:43.961 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x0 length 0x80000 00:13:43.961 nvme2n2 : 5.06 1846.18 7.21 0.00 0.00 68444.63 9326.28 78239.90 00:13:43.961 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x80000 length 0x80000 00:13:43.961 nvme2n2 : 5.07 2121.97 8.29 0.00 0.00 59786.68 13611.32 59688.17 00:13:43.961 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x0 length 0x80000 00:13:43.961 nvme2n3 : 5.08 1888.18 7.38 0.00 0.00 66841.45 3213.78 68560.74 00:13:43.961 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x80000 length 0x80000 00:13:43.961 nvme2n3 : 5.07 2119.80 8.28 0.00 0.00 59717.95 12451.84 53638.70 00:13:43.961 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x0 length 0x20000 00:13:43.961 nvme3n1 : 5.08 1865.76 7.29 0.00 0.00 67596.55 7612.26 84289.38 00:13:43.961 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.961 Verification LBA range: start 0x20000 length 0x20000 00:13:43.961 nvme3n1 : 5.08 2142.27 8.37 0.00 0.00 58990.43 2722.26 53638.70 00:13:43.961 [2024-11-17T00:45:36.024Z] =================================================================================================================== 00:13:43.961 [2024-11-17T00:45:36.024Z] Total : 24873.22 97.16 0.00 0.00 61216.47 2722.26 84289.38 00:13:44.222 00:13:44.222 real 0m5.869s 00:13:44.222 user 0m9.409s 00:13:44.222 sys 0m1.416s 00:13:44.222 00:45:36 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.222 00:45:36 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:44.222 ************************************ 00:13:44.222 END TEST bdev_verify 00:13:44.222 ************************************ 00:13:44.484 00:45:36 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:44.484 00:45:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:44.484 00:45:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.484 00:45:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.484 ************************************ 00:13:44.484 START TEST bdev_verify_big_io 00:13:44.484 ************************************ 00:13:44.484 00:45:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:44.484 [2024-11-17 00:45:36.405066] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:44.484 [2024-11-17 00:45:36.405216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82424 ] 00:13:44.760 [2024-11-17 00:45:36.553078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:44.760 [2024-11-17 00:45:36.609558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.760 [2024-11-17 00:45:36.609670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.024 Running I/O for 5 seconds... 
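[Editor's note: the big-I/O pass below is the same bdevperf verify run with the I/O size raised from 4 KiB to 64 KiB (-o 65536); queue depth, runtime, and core mask are unchanged, so the throughput figures that follow are directly comparable to the 4 KiB stage above:

  # same verify workload at 64 KiB per I/O
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3
]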
00:13:50.931 1520.00 IOPS, 95.00 MiB/s [2024-11-17T00:45:42.994Z] 3176.50 IOPS, 198.53 MiB/s [2024-11-17T00:45:43.255Z] 2809.00 IOPS, 175.56 MiB/s 00:13:51.192 Latency(us) 00:13:51.192 [2024-11-17T00:45:43.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.192 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.192 Verification LBA range: start 0x0 length 0xa000 00:13:51.192 nvme0n1 : 5.94 99.67 6.23 0.00 0.00 1234736.73 40329.85 1516402.22 00:13:51.192 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.192 Verification LBA range: start 0xa000 length 0xa000 00:13:51.192 nvme0n1 : 5.82 131.92 8.24 0.00 0.00 928044.24 32465.53 890483.00 00:13:51.193 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0x0 length 0xbd0b 00:13:51.193 nvme1n1 : 5.94 129.21 8.08 0.00 0.00 913979.54 24500.38 1142141.24 00:13:51.193 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:51.193 nvme1n1 : 5.83 173.17 10.82 0.00 0.00 688327.75 48194.17 784012.21 00:13:51.193 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0x0 length 0x8000 00:13:51.193 nvme2n1 : 5.98 128.45 8.03 0.00 0.00 891660.08 31658.93 961463.53 00:13:51.193 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0x8000 length 0x8000 00:13:51.193 nvme2n1 : 5.83 129.00 8.06 0.00 0.00 893285.71 19459.15 1129235.69 00:13:51.193 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0x0 length 0x8000 00:13:51.193 nvme2n2 : 6.00 74.62 4.66 0.00 0.00 1464126.68 44362.83 3213482.14 00:13:51.193 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0x8000 length 0x8000 00:13:51.193 nvme2n2 : 5.85 112.17 7.01 0.00 0.00 1026492.34 68964.04 2452054.65 00:13:51.193 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0x0 length 0x8000 00:13:51.193 nvme2n3 : 6.03 110.57 6.91 0.00 0.00 952566.63 25508.63 2710165.66 00:13:51.193 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0x8000 length 0x8000 00:13:51.193 nvme2n3 : 5.85 164.23 10.26 0.00 0.00 688741.64 6326.74 825955.25 00:13:51.193 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0x0 length 0x2000 00:13:51.193 nvme3n1 : 6.18 176.08 11.00 0.00 0.00 575649.79 472.62 2271376.94 00:13:51.193 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.193 Verification LBA range: start 0x2000 length 0x2000 00:13:51.193 nvme3n1 : 5.85 196.88 12.31 0.00 0.00 558084.51 4461.49 816276.09 00:13:51.193 [2024-11-17T00:45:43.256Z] =================================================================================================================== 00:13:51.193 [2024-11-17T00:45:43.256Z] Total : 1625.97 101.62 0.00 0.00 840443.38 472.62 3213482.14 00:13:51.454 00:13:51.454 real 0m7.010s 00:13:51.454 user 0m12.800s 00:13:51.454 sys 0m0.489s 00:13:51.454 ************************************ 00:13:51.454 END TEST bdev_verify_big_io 00:13:51.454 ************************************ 00:13:51.454 
00:45:43 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.454 00:45:43 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.454 00:45:43 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:51.454 00:45:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:51.454 00:45:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.454 00:45:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.454 ************************************ 00:13:51.454 START TEST bdev_write_zeroes 00:13:51.454 ************************************ 00:13:51.454 00:45:43 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:51.454 [2024-11-17 00:45:43.485968] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:51.454 [2024-11-17 00:45:43.486109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82528 ] 00:13:51.716 [2024-11-17 00:45:43.640936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.716 [2024-11-17 00:45:43.691178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.977 Running I/O for 1 seconds... 00:13:52.921 90464.00 IOPS, 353.38 MiB/s 00:13:52.921 Latency(us) 00:13:52.921 [2024-11-17T00:45:44.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.921 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.921 nvme0n1 : 1.02 14749.67 57.62 0.00 0.00 8668.26 5948.65 20971.52 00:13:52.921 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.921 nvme1n1 : 1.02 16102.52 62.90 0.00 0.00 7923.17 5394.12 16232.76 00:13:52.921 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.921 nvme2n1 : 1.02 14729.59 57.54 0.00 0.00 8618.62 4587.52 18955.03 00:13:52.921 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.921 nvme2n2 : 1.02 14712.56 57.47 0.00 0.00 8622.67 4562.31 18955.03 00:13:52.921 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.921 nvme2n3 : 1.02 14695.35 57.40 0.00 0.00 8626.90 4637.93 19257.50 00:13:52.921 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.921 nvme3n1 : 1.02 14677.39 57.33 0.00 0.00 8631.12 4688.34 19660.80 00:13:52.921 [2024-11-17T00:45:44.984Z] =================================================================================================================== 00:13:52.921 [2024-11-17T00:45:44.984Z] Total : 89667.07 350.26 0.00 0.00 8505.58 4562.31 20971.52 00:13:53.183 00:13:53.183 real 0m1.755s 00:13:53.183 user 0m1.097s 00:13:53.183 sys 0m0.478s 00:13:53.183 00:45:45 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.183 ************************************ 00:13:53.183 END TEST bdev_write_zeroes 00:13:53.183 
************************************ 00:13:53.183 00:45:45 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:53.183 00:45:45 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.183 00:45:45 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:53.183 00:45:45 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.183 00:45:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.183 ************************************ 00:13:53.183 START TEST bdev_json_nonenclosed 00:13:53.183 ************************************ 00:13:53.183 00:45:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.444 [2024-11-17 00:45:45.310725] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:53.444 [2024-11-17 00:45:45.310864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82565 ] 00:13:53.444 [2024-11-17 00:45:45.460736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.705 [2024-11-17 00:45:45.512981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.705 [2024-11-17 00:45:45.513111] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:53.705 [2024-11-17 00:45:45.513132] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:53.705 [2024-11-17 00:45:45.513145] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:53.705 00:13:53.705 real 0m0.381s 00:13:53.705 user 0m0.158s 00:13:53.705 sys 0m0.118s 00:13:53.705 00:45:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.705 ************************************ 00:13:53.705 END TEST bdev_json_nonenclosed 00:13:53.705 ************************************ 00:13:53.705 00:45:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:53.705 00:45:45 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.706 00:45:45 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:53.706 00:45:45 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.706 00:45:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.706 ************************************ 00:13:53.706 START TEST bdev_json_nonarray 00:13:53.706 ************************************ 00:13:53.706 00:45:45 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.706 [2024-11-17 00:45:45.754462] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
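[Editor's note: bdev_json_nonenclosed above and bdev_json_nonarray below are negative tests — each feeds bdevperf a deliberately malformed JSON config and passes only when json_config rejects it and the app stops non-zero, as the ERROR/WARNING lines in this log show. The shapes below are hypothetical illustrations inferred from those error strings, not the actual repo files:

  # valid config:              { "subsystems": [ ... ] }
  # nonenclosed.json (say):      "subsystems": [ ... ]      -> "not enclosed in {}"
  # nonarray.json (say):       { "subsystems": { ... } }    -> "'subsystems' should be an array"
]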
00:13:53.706 [2024-11-17 00:45:45.754608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82590 ] 00:13:53.967 [2024-11-17 00:45:45.907063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.967 [2024-11-17 00:45:45.958986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.967 [2024-11-17 00:45:45.959125] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:53.967 [2024-11-17 00:45:45.959147] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:53.967 [2024-11-17 00:45:45.959160] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:54.229 00:13:54.229 real 0m0.383s 00:13:54.229 user 0m0.156s 00:13:54.229 sys 0m0.118s 00:13:54.229 00:45:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.229 ************************************ 00:13:54.229 END TEST bdev_json_nonarray 00:13:54.229 ************************************ 00:13:54.229 00:45:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:54.229 00:45:46 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:54.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:55.747 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.692 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.692 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.692 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.692 00:13:56.692 real 0m49.637s 00:13:56.692 user 1m18.613s 00:13:56.692 sys 0m31.768s 00:13:56.692 00:45:48 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.692 ************************************ 00:13:56.692 END TEST blockdev_xnvme 00:13:56.692 00:45:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.692 ************************************ 00:13:56.692 00:45:48 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:56.692 00:45:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:56.692 00:45:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.692 00:45:48 -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.692 ************************************ 00:13:56.692 START TEST ublk 00:13:56.692 ************************************ 00:13:56.692 00:45:48 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:56.692 * Looking for test storage... 00:13:56.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:56.692 00:45:48 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:56.692 00:45:48 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:56.692 00:45:48 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:56.953 00:45:48 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:56.953 00:45:48 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.953 00:45:48 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.953 00:45:48 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.953 00:45:48 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.953 00:45:48 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.953 00:45:48 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.953 00:45:48 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.953 00:45:48 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.953 00:45:48 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.953 00:45:48 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.953 00:45:48 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.953 00:45:48 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:56.953 00:45:48 ublk -- scripts/common.sh@345 -- # : 1 00:13:56.953 00:45:48 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.953 00:45:48 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.953 00:45:48 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:56.953 00:45:48 ublk -- scripts/common.sh@353 -- # local d=1 00:13:56.953 00:45:48 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.953 00:45:48 ublk -- scripts/common.sh@355 -- # echo 1 00:13:56.953 00:45:48 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.953 00:45:48 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:56.953 00:45:48 ublk -- scripts/common.sh@353 -- # local d=2 00:13:56.953 00:45:48 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.953 00:45:48 ublk -- scripts/common.sh@355 -- # echo 2 00:13:56.953 00:45:48 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.953 00:45:48 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.953 00:45:48 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.953 00:45:48 ublk -- scripts/common.sh@368 -- # return 0 00:13:56.953 00:45:48 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.953 00:45:48 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:56.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.953 --rc genhtml_branch_coverage=1 00:13:56.953 --rc genhtml_function_coverage=1 00:13:56.953 --rc genhtml_legend=1 00:13:56.953 --rc geninfo_all_blocks=1 00:13:56.953 --rc geninfo_unexecuted_blocks=1 00:13:56.953 00:13:56.953 ' 00:13:56.953 00:45:48 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:56.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.953 --rc genhtml_branch_coverage=1 00:13:56.953 --rc genhtml_function_coverage=1 00:13:56.953 --rc genhtml_legend=1 00:13:56.953 --rc geninfo_all_blocks=1 00:13:56.953 --rc geninfo_unexecuted_blocks=1 00:13:56.953 00:13:56.953 ' 00:13:56.953 00:45:48 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:56.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.953 --rc genhtml_branch_coverage=1 00:13:56.953 --rc genhtml_function_coverage=1 00:13:56.953 --rc genhtml_legend=1 00:13:56.953 --rc geninfo_all_blocks=1 00:13:56.953 --rc geninfo_unexecuted_blocks=1 00:13:56.953 00:13:56.953 ' 00:13:56.953 00:45:48 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:56.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.953 --rc genhtml_branch_coverage=1 00:13:56.953 --rc genhtml_function_coverage=1 00:13:56.953 --rc genhtml_legend=1 00:13:56.953 --rc geninfo_all_blocks=1 00:13:56.953 --rc geninfo_unexecuted_blocks=1 00:13:56.953 00:13:56.953 ' 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:56.953 00:45:48 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:56.953 00:45:48 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:56.953 00:45:48 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:56.953 00:45:48 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:56.953 00:45:48 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:56.953 00:45:48 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:56.953 00:45:48 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:56.953 00:45:48 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:56.953 00:45:48 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:56.953 00:45:48 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:56.953 00:45:48 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:56.953 00:45:48 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.953 00:45:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:56.953 ************************************ 00:13:56.953 START TEST test_save_ublk_config 00:13:56.953 ************************************ 00:13:56.953 00:45:48 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:56.953 00:45:48 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:56.953 00:45:48 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82877 00:13:56.954 00:45:48 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:56.954 00:45:48 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82877 00:13:56.954 00:45:48 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82877 ']' 00:13:56.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.954 00:45:48 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.954 00:45:48 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.954 00:45:48 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.954 00:45:48 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.954 00:45:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:56.954 00:45:48 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:56.954 [2024-11-17 00:45:48.905457] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
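[Editor's note: test_save_ublk_config exercises a config round-trip — a freshly started spdk_tgt creates a ublk target pinned to one core, exposes malloc0 as ublk device 0 (/dev/ublkb0), then rpc save_config dumps the whole runtime state as JSON (shown below). A sketch of the equivalent manual rpc flow; the option spellings are an assumption, but the method and parameter names match the saved config that follows:

  # from the spdk repo root, against a running spdk_tgt -L ublk
  scripts/rpc.py ublk_create_target -c 1                  # "cpumask": "1"
  scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128    # num_queues=1, queue_depth=128
  scripts/rpc.py save_config > /tmp/ublk_config.json      # hypothetical output path
]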
00:13:56.954 [2024-11-17 00:45:48.905611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82877 ] 00:13:57.214 [2024-11-17 00:45:49.061789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.214 [2024-11-17 00:45:49.124653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.787 [2024-11-17 00:45:49.748385] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:57.787 [2024-11-17 00:45:49.748787] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:57.787 malloc0 00:13:57.787 [2024-11-17 00:45:49.780588] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:57.787 [2024-11-17 00:45:49.780699] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:57.787 [2024-11-17 00:45:49.780712] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:57.787 [2024-11-17 00:45:49.780725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:57.787 [2024-11-17 00:45:49.789497] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:57.787 [2024-11-17 00:45:49.789543] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:57.787 [2024-11-17 00:45:49.796391] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:57.787 [2024-11-17 00:45:49.796521] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:57.787 [2024-11-17 00:45:49.813392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:57.787 0 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.787 00:45:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:58.049 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.049 00:45:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:58.049 "subsystems": [ 00:13:58.049 { 00:13:58.049 "subsystem": "fsdev", 00:13:58.049 "config": [ 00:13:58.049 { 00:13:58.049 "method": "fsdev_set_opts", 00:13:58.049 "params": { 00:13:58.049 "fsdev_io_pool_size": 65535, 00:13:58.049 "fsdev_io_cache_size": 256 00:13:58.049 } 00:13:58.049 } 00:13:58.049 ] 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "subsystem": "keyring", 00:13:58.049 "config": [] 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "subsystem": "iobuf", 00:13:58.049 "config": [ 00:13:58.049 { 
00:13:58.049 "method": "iobuf_set_options", 00:13:58.049 "params": { 00:13:58.049 "small_pool_count": 8192, 00:13:58.049 "large_pool_count": 1024, 00:13:58.049 "small_bufsize": 8192, 00:13:58.049 "large_bufsize": 135168 00:13:58.049 } 00:13:58.049 } 00:13:58.049 ] 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "subsystem": "sock", 00:13:58.049 "config": [ 00:13:58.049 { 00:13:58.049 "method": "sock_set_default_impl", 00:13:58.049 "params": { 00:13:58.049 "impl_name": "posix" 00:13:58.049 } 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "method": "sock_impl_set_options", 00:13:58.049 "params": { 00:13:58.049 "impl_name": "ssl", 00:13:58.049 "recv_buf_size": 4096, 00:13:58.049 "send_buf_size": 4096, 00:13:58.049 "enable_recv_pipe": true, 00:13:58.049 "enable_quickack": false, 00:13:58.049 "enable_placement_id": 0, 00:13:58.049 "enable_zerocopy_send_server": true, 00:13:58.049 "enable_zerocopy_send_client": false, 00:13:58.049 "zerocopy_threshold": 0, 00:13:58.049 "tls_version": 0, 00:13:58.049 "enable_ktls": false 00:13:58.049 } 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "method": "sock_impl_set_options", 00:13:58.049 "params": { 00:13:58.049 "impl_name": "posix", 00:13:58.049 "recv_buf_size": 2097152, 00:13:58.049 "send_buf_size": 2097152, 00:13:58.049 "enable_recv_pipe": true, 00:13:58.049 "enable_quickack": false, 00:13:58.049 "enable_placement_id": 0, 00:13:58.049 "enable_zerocopy_send_server": true, 00:13:58.049 "enable_zerocopy_send_client": false, 00:13:58.049 "zerocopy_threshold": 0, 00:13:58.049 "tls_version": 0, 00:13:58.049 "enable_ktls": false 00:13:58.049 } 00:13:58.049 } 00:13:58.049 ] 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "subsystem": "vmd", 00:13:58.049 "config": [] 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "subsystem": "accel", 00:13:58.049 "config": [ 00:13:58.049 { 00:13:58.049 "method": "accel_set_options", 00:13:58.049 "params": { 00:13:58.049 "small_cache_size": 128, 00:13:58.049 "large_cache_size": 16, 00:13:58.049 "task_count": 2048, 00:13:58.049 "sequence_count": 2048, 00:13:58.049 "buf_count": 2048 00:13:58.049 } 00:13:58.049 } 00:13:58.049 ] 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "subsystem": "bdev", 00:13:58.049 "config": [ 00:13:58.049 { 00:13:58.049 "method": "bdev_set_options", 00:13:58.049 "params": { 00:13:58.049 "bdev_io_pool_size": 65535, 00:13:58.049 "bdev_io_cache_size": 256, 00:13:58.049 "bdev_auto_examine": true, 00:13:58.049 "iobuf_small_cache_size": 128, 00:13:58.049 "iobuf_large_cache_size": 16 00:13:58.049 } 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "method": "bdev_raid_set_options", 00:13:58.049 "params": { 00:13:58.049 "process_window_size_kb": 1024, 00:13:58.049 "process_max_bandwidth_mb_sec": 0 00:13:58.049 } 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "method": "bdev_iscsi_set_options", 00:13:58.049 "params": { 00:13:58.049 "timeout_sec": 30 00:13:58.049 } 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "method": "bdev_nvme_set_options", 00:13:58.049 "params": { 00:13:58.049 "action_on_timeout": "none", 00:13:58.049 "timeout_us": 0, 00:13:58.049 "timeout_admin_us": 0, 00:13:58.049 "keep_alive_timeout_ms": 10000, 00:13:58.049 "arbitration_burst": 0, 00:13:58.049 "low_priority_weight": 0, 00:13:58.049 "medium_priority_weight": 0, 00:13:58.049 "high_priority_weight": 0, 00:13:58.049 "nvme_adminq_poll_period_us": 10000, 00:13:58.049 "nvme_ioq_poll_period_us": 0, 00:13:58.049 "io_queue_requests": 0, 00:13:58.049 "delay_cmd_submit": true, 00:13:58.049 "transport_retry_count": 4, 00:13:58.049 "bdev_retry_count": 3, 00:13:58.049 
"transport_ack_timeout": 0, 00:13:58.049 "ctrlr_loss_timeout_sec": 0, 00:13:58.049 "reconnect_delay_sec": 0, 00:13:58.049 "fast_io_fail_timeout_sec": 0, 00:13:58.049 "disable_auto_failback": false, 00:13:58.049 "generate_uuids": false, 00:13:58.049 "transport_tos": 0, 00:13:58.049 "nvme_error_stat": false, 00:13:58.049 "rdma_srq_size": 0, 00:13:58.049 "io_path_stat": false, 00:13:58.049 "allow_accel_sequence": false, 00:13:58.049 "rdma_max_cq_size": 0, 00:13:58.049 "rdma_cm_event_timeout_ms": 0, 00:13:58.049 "dhchap_digests": [ 00:13:58.049 "sha256", 00:13:58.049 "sha384", 00:13:58.049 "sha512" 00:13:58.049 ], 00:13:58.049 "dhchap_dhgroups": [ 00:13:58.049 "null", 00:13:58.049 "ffdhe2048", 00:13:58.049 "ffdhe3072", 00:13:58.049 "ffdhe4096", 00:13:58.049 "ffdhe6144", 00:13:58.049 "ffdhe8192" 00:13:58.049 ] 00:13:58.049 } 00:13:58.049 }, 00:13:58.049 { 00:13:58.049 "method": "bdev_nvme_set_hotplug", 00:13:58.049 "params": { 00:13:58.050 "period_us": 100000, 00:13:58.050 "enable": false 00:13:58.050 } 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "method": "bdev_malloc_create", 00:13:58.050 "params": { 00:13:58.050 "name": "malloc0", 00:13:58.050 "num_blocks": 8192, 00:13:58.050 "block_size": 4096, 00:13:58.050 "physical_block_size": 4096, 00:13:58.050 "uuid": "814175d2-7a58-4f50-ac9a-255e2a497f56", 00:13:58.050 "optimal_io_boundary": 0, 00:13:58.050 "md_size": 0, 00:13:58.050 "dif_type": 0, 00:13:58.050 "dif_is_head_of_md": false, 00:13:58.050 "dif_pi_format": 0 00:13:58.050 } 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "method": "bdev_wait_for_examine" 00:13:58.050 } 00:13:58.050 ] 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "subsystem": "scsi", 00:13:58.050 "config": null 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "subsystem": "scheduler", 00:13:58.050 "config": [ 00:13:58.050 { 00:13:58.050 "method": "framework_set_scheduler", 00:13:58.050 "params": { 00:13:58.050 "name": "static" 00:13:58.050 } 00:13:58.050 } 00:13:58.050 ] 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "subsystem": "vhost_scsi", 00:13:58.050 "config": [] 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "subsystem": "vhost_blk", 00:13:58.050 "config": [] 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "subsystem": "ublk", 00:13:58.050 "config": [ 00:13:58.050 { 00:13:58.050 "method": "ublk_create_target", 00:13:58.050 "params": { 00:13:58.050 "cpumask": "1" 00:13:58.050 } 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "method": "ublk_start_disk", 00:13:58.050 "params": { 00:13:58.050 "bdev_name": "malloc0", 00:13:58.050 "ublk_id": 0, 00:13:58.050 "num_queues": 1, 00:13:58.050 "queue_depth": 128 00:13:58.050 } 00:13:58.050 } 00:13:58.050 ] 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "subsystem": "nbd", 00:13:58.050 "config": [] 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "subsystem": "nvmf", 00:13:58.050 "config": [ 00:13:58.050 { 00:13:58.050 "method": "nvmf_set_config", 00:13:58.050 "params": { 00:13:58.050 "discovery_filter": "match_any", 00:13:58.050 "admin_cmd_passthru": { 00:13:58.050 "identify_ctrlr": false 00:13:58.050 }, 00:13:58.050 "dhchap_digests": [ 00:13:58.050 "sha256", 00:13:58.050 "sha384", 00:13:58.050 "sha512" 00:13:58.050 ], 00:13:58.050 "dhchap_dhgroups": [ 00:13:58.050 "null", 00:13:58.050 "ffdhe2048", 00:13:58.050 "ffdhe3072", 00:13:58.050 "ffdhe4096", 00:13:58.050 "ffdhe6144", 00:13:58.050 "ffdhe8192" 00:13:58.050 ] 00:13:58.050 } 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "method": "nvmf_set_max_subsystems", 00:13:58.050 "params": { 00:13:58.050 "max_subsystems": 1024 00:13:58.050 } 00:13:58.050 }, 00:13:58.050 
{ 00:13:58.050 "method": "nvmf_set_crdt", 00:13:58.050 "params": { 00:13:58.050 "crdt1": 0, 00:13:58.050 "crdt2": 0, 00:13:58.050 "crdt3": 0 00:13:58.050 } 00:13:58.050 } 00:13:58.050 ] 00:13:58.050 }, 00:13:58.050 { 00:13:58.050 "subsystem": "iscsi", 00:13:58.050 "config": [ 00:13:58.050 { 00:13:58.050 "method": "iscsi_set_options", 00:13:58.050 "params": { 00:13:58.050 "node_base": "iqn.2016-06.io.spdk", 00:13:58.050 "max_sessions": 128, 00:13:58.050 "max_connections_per_session": 2, 00:13:58.050 "max_queue_depth": 64, 00:13:58.050 "default_time2wait": 2, 00:13:58.050 "default_time2retain": 20, 00:13:58.050 "first_burst_length": 8192, 00:13:58.050 "immediate_data": true, 00:13:58.050 "allow_duplicated_isid": false, 00:13:58.050 "error_recovery_level": 0, 00:13:58.050 "nop_timeout": 60, 00:13:58.050 "nop_in_interval": 30, 00:13:58.050 "disable_chap": false, 00:13:58.050 "require_chap": false, 00:13:58.050 "mutual_chap": false, 00:13:58.050 "chap_group": 0, 00:13:58.050 "max_large_datain_per_connection": 64, 00:13:58.050 "max_r2t_per_connection": 4, 00:13:58.050 "pdu_pool_size": 36864, 00:13:58.050 "immediate_data_pool_size": 16384, 00:13:58.050 "data_out_pool_size": 2048 00:13:58.050 } 00:13:58.050 } 00:13:58.050 ] 00:13:58.050 } 00:13:58.050 ] 00:13:58.050 }' 00:13:58.050 00:45:50 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82877 00:13:58.050 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82877 ']' 00:13:58.050 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82877 00:13:58.050 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:58.050 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.050 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82877 00:13:58.312 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.312 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.312 killing process with pid 82877 00:13:58.312 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82877' 00:13:58.312 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82877 00:13:58.312 00:45:50 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82877 00:13:58.573 [2024-11-17 00:45:50.431835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:58.573 [2024-11-17 00:45:50.467410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:58.573 [2024-11-17 00:45:50.467560] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:58.573 [2024-11-17 00:45:50.475393] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:58.573 [2024-11-17 00:45:50.475469] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:58.574 [2024-11-17 00:45:50.475478] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:58.574 [2024-11-17 00:45:50.475508] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:58.574 [2024-11-17 00:45:50.475662] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:59.147 00:45:50 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82915 00:13:59.147 00:45:51 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82915 
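[Editor's note: with the config captured, the test kills the first target (pid 82877) and boots a second one (pid 82915), echoing the saved JSON back as its startup configuration — presumably passing only if the restored target comes up with the same ublk state. A sketch of that restart, under the assumption that the JSON above was written to a file (path hypothetical):

  # restart from the captured state; spdk_tgt accepts a JSON config file via -c
  ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json
]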
00:13:59.147 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82915 ']' 00:13:59.147 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.147 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.147 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.147 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.147 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:59.147 00:45:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:59.147 "subsystems": [ 00:13:59.147 { 00:13:59.147 "subsystem": "fsdev", 00:13:59.147 "config": [ 00:13:59.147 { 00:13:59.147 "method": "fsdev_set_opts", 00:13:59.147 "params": { 00:13:59.147 "fsdev_io_pool_size": 65535, 00:13:59.147 "fsdev_io_cache_size": 256 00:13:59.147 } 00:13:59.147 } 00:13:59.147 ] 00:13:59.147 }, 00:13:59.147 { 00:13:59.147 "subsystem": "keyring", 00:13:59.147 "config": [] 00:13:59.147 }, 00:13:59.147 { 00:13:59.147 "subsystem": "iobuf", 00:13:59.147 "config": [ 00:13:59.147 { 00:13:59.147 "method": "iobuf_set_options", 00:13:59.147 "params": { 00:13:59.148 "small_pool_count": 8192, 00:13:59.148 "large_pool_count": 1024, 00:13:59.148 "small_bufsize": 8192, 00:13:59.148 "large_bufsize": 135168 00:13:59.148 } 00:13:59.148 } 00:13:59.148 ] 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "subsystem": "sock", 00:13:59.148 "config": [ 00:13:59.148 { 00:13:59.148 "method": "sock_set_default_impl", 00:13:59.148 "params": { 00:13:59.148 "impl_name": "posix" 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "sock_impl_set_options", 00:13:59.148 "params": { 00:13:59.148 "impl_name": "ssl", 00:13:59.148 "recv_buf_size": 4096, 00:13:59.148 "send_buf_size": 4096, 00:13:59.148 "enable_recv_pipe": true, 00:13:59.148 "enable_quickack": false, 00:13:59.148 "enable_placement_id": 0, 00:13:59.148 "enable_zerocopy_send_server": true, 00:13:59.148 "enable_zerocopy_send_client": false, 00:13:59.148 "zerocopy_threshold": 0, 00:13:59.148 "tls_version": 0, 00:13:59.148 "enable_ktls": false 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "sock_impl_set_options", 00:13:59.148 "params": { 00:13:59.148 "impl_name": "posix", 00:13:59.148 "recv_buf_size": 2097152, 00:13:59.148 "send_buf_size": 2097152, 00:13:59.148 "enable_recv_pipe": true, 00:13:59.148 "enable_quickack": false, 00:13:59.148 "enable_placement_id": 0, 00:13:59.148 "enable_zerocopy_send_server": true, 00:13:59.148 "enable_zerocopy_send_client": false, 00:13:59.148 "zerocopy_threshold": 0, 00:13:59.148 "tls_version": 0, 00:13:59.148 "enable_ktls": false 00:13:59.148 } 00:13:59.148 } 00:13:59.148 ] 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "subsystem": "vmd", 00:13:59.148 "config": [] 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "subsystem": "accel", 00:13:59.148 "config": [ 00:13:59.148 { 00:13:59.148 "method": "accel_set_options", 00:13:59.148 "params": { 00:13:59.148 "small_cache_size": 128, 00:13:59.148 "large_cache_size": 16, 00:13:59.148 "task_count": 2048, 00:13:59.148 "sequence_count": 2048, 00:13:59.148 "buf_count": 2048 00:13:59.148 } 00:13:59.148 } 00:13:59.148 ] 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 
"subsystem": "bdev", 00:13:59.148 "config": [ 00:13:59.148 { 00:13:59.148 "method": "bdev_set_options", 00:13:59.148 "params": { 00:13:59.148 "bdev_io_pool_size": 65535, 00:13:59.148 "bdev_io_cache_size": 256, 00:13:59.148 "bdev_auto_examine": true, 00:13:59.148 "iobuf_small_cache_size": 128, 00:13:59.148 "iobuf_large_cache_size": 16 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "bdev_raid_set_options", 00:13:59.148 "params": { 00:13:59.148 "process_window_size_kb": 1024, 00:13:59.148 "process_max_bandwidth_mb_sec": 0 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "bdev_iscsi_set_options", 00:13:59.148 "params": { 00:13:59.148 "timeout_sec": 30 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "bdev_nvme_set_options", 00:13:59.148 "params": { 00:13:59.148 "action_on_timeout": "none", 00:13:59.148 "timeout_us": 0, 00:13:59.148 "timeout_admin_us": 0, 00:13:59.148 "keep_alive_timeout_ms": 10000, 00:13:59.148 "arbitration_burst": 0, 00:13:59.148 "low_priority_weight": 0, 00:13:59.148 "medium_priority_weight": 0, 00:13:59.148 "high_priority_weight": 0, 00:13:59.148 "nvme_adminq_poll_period_us": 10000, 00:13:59.148 "nvme_ioq_poll_period_us": 0, 00:13:59.148 "io_queue_requests": 0, 00:13:59.148 "delay_cmd_submit": true, 00:13:59.148 "transport_retry_count": 4, 00:13:59.148 "bdev_retry_count": 3, 00:13:59.148 "transport_ack_timeout": 0, 00:13:59.148 "ctrlr_loss_timeout_sec": 0, 00:13:59.148 "reconnect_delay_sec": 0, 00:13:59.148 "fast_io_fail_timeout_sec": 0, 00:13:59.148 "disable_auto_failback": false, 00:13:59.148 "generate_uuids": false, 00:13:59.148 "transport_tos": 0, 00:13:59.148 "nvme_error_stat": false, 00:13:59.148 "rdma_srq_size": 0, 00:13:59.148 "io_path_stat": false, 00:13:59.148 "allow_accel_sequence": false, 00:13:59.148 "rdma_max_cq_size": 0, 00:13:59.148 "rdma_cm_event_timeout_ms": 0, 00:13:59.148 "dhchap_digests": [ 00:13:59.148 "sha256", 00:13:59.148 "sha384", 00:13:59.148 "sha512" 00:13:59.148 ], 00:13:59.148 "dhchap_dhgroups": [ 00:13:59.148 "null", 00:13:59.148 "ffdhe2048", 00:13:59.148 "ffdhe3072", 00:13:59.148 "ffdhe4096", 00:13:59.148 "ffdhe6144", 00:13:59.148 "ffdhe8192" 00:13:59.148 ] 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "bdev_nvme_set_hotplug", 00:13:59.148 "params": { 00:13:59.148 "period_us": 100000, 00:13:59.148 "enable": false 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "bdev_malloc_create", 00:13:59.148 "params": { 00:13:59.148 "name": "malloc0", 00:13:59.148 "num_blocks": 8192, 00:13:59.148 "block_size": 4096, 00:13:59.148 "physical_block_size": 4096, 00:13:59.148 "uuid": "814175d2-7a58-4f50-ac9a-255e2a497f56", 00:13:59.148 "optimal_io_boundary": 0, 00:13:59.148 "md_size": 0, 00:13:59.148 "dif_type": 0, 00:13:59.148 "dif_is_head_of_md": false, 00:13:59.148 "dif_pi_format": 0 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "bdev_wait_for_examine" 00:13:59.148 } 00:13:59.148 ] 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "subsystem": "scsi", 00:13:59.148 "config": null 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "subsystem": "scheduler", 00:13:59.148 "config": [ 00:13:59.148 { 00:13:59.148 "method": "framework_set_scheduler", 00:13:59.148 "params": { 00:13:59.148 "name": "static" 00:13:59.148 } 00:13:59.148 } 00:13:59.148 ] 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "subsystem": "vhost_scsi", 00:13:59.148 "config": [] 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "subsystem": "vhost_blk", 00:13:59.148 "config": [] 00:13:59.148 }, 
00:13:59.148 { 00:13:59.148 "subsystem": "ublk", 00:13:59.148 "config": [ 00:13:59.148 { 00:13:59.148 "method": "ublk_create_target", 00:13:59.148 "params": { 00:13:59.148 "cpumask": "1" 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "ublk_start_disk", 00:13:59.148 "params": { 00:13:59.148 "bdev_name": "malloc0", 00:13:59.148 "ublk_id": 0, 00:13:59.148 "num_queues": 1, 00:13:59.148 "queue_depth": 128 00:13:59.148 } 00:13:59.148 } 00:13:59.148 ] 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "subsystem": "nbd", 00:13:59.148 "config": [] 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "subsystem": "nvmf", 00:13:59.148 "config": [ 00:13:59.148 { 00:13:59.148 "method": "nvmf_set_config", 00:13:59.148 "params": { 00:13:59.148 "discovery_filter": "match_any", 00:13:59.148 "admin_cmd_passthru": { 00:13:59.148 "identify_ctrlr": false 00:13:59.148 }, 00:13:59.148 "dhchap_digests": [ 00:13:59.148 "sha256", 00:13:59.148 "sha384", 00:13:59.148 "sha512" 00:13:59.148 ], 00:13:59.148 "dhchap_dhgroups": [ 00:13:59.148 "null", 00:13:59.148 "ffdhe2048", 00:13:59.148 "ffdhe3072", 00:13:59.148 "ffdhe4096", 00:13:59.148 "ffdhe6144", 00:13:59.148 "ffdhe8192" 00:13:59.148 ] 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "nvmf_set_max_subsystems", 00:13:59.148 "params": { 00:13:59.148 "max_subsystems": 1024 00:13:59.148 } 00:13:59.148 }, 00:13:59.148 { 00:13:59.148 "method": "nvmf_set_crdt", 00:13:59.148 "params": { 00:13:59.148 "crdt1": 0, 00:13:59.148 "crdt2": 0, 00:13:59.148 "crdt3": 0 00:13:59.149 } 00:13:59.149 } 00:13:59.149 ] 00:13:59.149 }, 00:13:59.149 { 00:13:59.149 "subsystem": "iscsi", 00:13:59.149 "config": [ 00:13:59.149 { 00:13:59.149 "method": "iscsi_set_options", 00:13:59.149 "params": { 00:13:59.149 "node_base": "iqn.2016-06.io.spdk", 00:13:59.149 "max_sessions": 128, 00:13:59.149 "max_connections_per_session": 2, 00:13:59.149 "max_queue_depth": 64, 00:13:59.149 "default_time2wait": 2, 00:13:59.149 "default_time2retain": 20, 00:13:59.149 "first_burst_length": 8192, 00:13:59.149 "immediate_data": true, 00:13:59.149 "allow_duplicated_isid": false, 00:13:59.149 "error_recovery_level": 0, 00:13:59.149 "nop_timeout": 60, 00:13:59.149 "nop_in_interval": 30, 00:13:59.149 "disable_chap": false, 00:13:59.149 "require_chap": false, 00:13:59.149 "mutual_chap": false, 00:13:59.149 "chap_group": 0, 00:13:59.149 "max_large_datain_per_connection": 64, 00:13:59.149 "max_r2t_per_connection": 4, 00:13:59.149 "pdu_pool_size": 36864, 00:13:59.149 "immediate_data_pool_size": 16384, 00:13:59.149 "data_out_pool_size": 2048 00:13:59.149 } 00:13:59.149 } 00:13:59.149 ] 00:13:59.149 } 00:13:59.149 ] 00:13:59.149 }' 00:13:59.149 00:45:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:59.149 [2024-11-17 00:45:51.089292] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
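For reference, the JSON above is the saved subsystem configuration that ublk.sh@118 echoes back into a fresh spdk_tgt through /dev/fd/63. A minimal sketch of the same save/replay round-trip done by hand, assuming a target already listening on the default /var/tmp/spdk.sock and run from the SPDK repo root:

  # dump the live configuration of the running target to a file
  scripts/rpc.py save_config > ublk_config.json
  # start a new target and replay the saved subsystem config at startup
  build/bin/spdk_tgt -L ublk -c ublk_config.json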
00:13:59.149 [2024-11-17 00:45:51.089464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82915 ] 00:13:59.410 [2024-11-17 00:45:51.239315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.410 [2024-11-17 00:45:51.297732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.672 [2024-11-17 00:45:51.662378] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:59.672 [2024-11-17 00:45:51.662717] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:59.672 [2024-11-17 00:45:51.670529] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:59.672 [2024-11-17 00:45:51.670616] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:59.672 [2024-11-17 00:45:51.670628] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:59.672 [2024-11-17 00:45:51.670635] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:59.672 [2024-11-17 00:45:51.679494] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:59.672 [2024-11-17 00:45:51.679523] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:59.672 [2024-11-17 00:45:51.686397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:59.672 [2024-11-17 00:45:51.686517] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:59.672 [2024-11-17 00:45:51.703391] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82915 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82915 ']' 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82915 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.934 00:45:51 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82915 00:14:00.196 killing process with pid 82915 00:14:00.196 00:45:52 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:00.196 
00:45:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:00.196 00:45:52 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82915' 00:14:00.196 00:45:52 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82915 00:14:00.196 00:45:52 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82915 00:14:00.458 [2024-11-17 00:45:52.304253] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:00.458 [2024-11-17 00:45:52.340484] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.458 [2024-11-17 00:45:52.340673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.458 [2024-11-17 00:45:52.349392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:00.458 [2024-11-17 00:45:52.349456] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:00.458 [2024-11-17 00:45:52.349464] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:00.458 [2024-11-17 00:45:52.349496] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:00.458 [2024-11-17 00:45:52.349653] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:01.032 00:45:52 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:01.032 00:14:01.032 real 0m4.076s 00:14:01.032 user 0m2.735s 00:14:01.032 sys 0m1.994s 00:14:01.032 00:45:52 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.032 00:45:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:01.032 ************************************ 00:14:01.032 END TEST test_save_ublk_config 00:14:01.032 ************************************ 00:14:01.032 00:45:52 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82971 00:14:01.032 00:45:52 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.032 00:45:52 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82971 00:14:01.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.032 00:45:52 ublk -- common/autotest_common.sh@831 -- # '[' -z 82971 ']' 00:14:01.032 00:45:52 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.032 00:45:52 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:01.032 00:45:52 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.032 00:45:52 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:01.032 00:45:52 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:01.032 00:45:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.032 [2024-11-17 00:45:53.022250] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
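The main test run starts spdk_tgt with -m 0x3, a hex core mask selecting cores 0 and 1, which matches the "Total cores available: 2" and the two reactor start messages that follow. A rough sketch of the launch-and-wait sequence, with the RPC probe standing in for what waitforlisten does (polling until the UNIX socket answers):

  # start the target on cores 0-1 with ublk debug logging enabled
  build/bin/spdk_tgt -m 0x3 -L ublk &
  # poll the RPC socket until it responds (approximation of waitforlisten)
  scripts/rpc.py -t 100 rpc_get_methods > /dev/null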
00:14:01.032 [2024-11-17 00:45:53.022810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82971 ] 00:14:01.294 [2024-11-17 00:45:53.182078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:01.294 [2024-11-17 00:45:53.233257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.294 [2024-11-17 00:45:53.233373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.868 00:45:53 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.868 00:45:53 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:01.868 00:45:53 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:01.868 00:45:53 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:01.868 00:45:53 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.868 00:45:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.868 ************************************ 00:14:01.868 START TEST test_create_ublk 00:14:01.868 ************************************ 00:14:01.868 00:45:53 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:01.868 00:45:53 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:01.868 00:45:53 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.868 00:45:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.868 [2024-11-17 00:45:53.897384] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:01.868 [2024-11-17 00:45:53.899203] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:01.868 00:45:53 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.868 00:45:53 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:01.868 00:45:53 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:01.868 00:45:53 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.868 00:45:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.130 00:45:53 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.130 00:45:53 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:02.130 00:45:53 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:02.130 00:45:53 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.130 00:45:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.130 [2024-11-17 00:45:53.985557] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:02.130 [2024-11-17 00:45:53.986024] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:02.130 [2024-11-17 00:45:53.986056] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:02.130 [2024-11-17 00:45:53.986065] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.130 [2024-11-17 00:45:53.997398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.130 [2024-11-17 00:45:53.997431] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.130 
[2024-11-17 00:45:54.008392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.130 [2024-11-17 00:45:54.009134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:02.130 [2024-11-17 00:45:54.029409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.130 00:45:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:02.130 00:45:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.130 00:45:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.130 00:45:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:02.130 { 00:14:02.130 "ublk_device": "/dev/ublkb0", 00:14:02.130 "id": 0, 00:14:02.130 "queue_depth": 512, 00:14:02.130 "num_queues": 4, 00:14:02.130 "bdev_name": "Malloc0" 00:14:02.130 } 00:14:02.130 ]' 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:02.130 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:02.392 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:02.392 00:45:54 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
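The fio template assembled above, reflowed for readability; every flag comes from run_fio_test and the invocation that follows:

  # write the 0xcc pattern over the first 128 MiB (134217728 bytes) of the
  # ublk device with O_DIRECT writes; --time_based keeps writing for the full
  # 10 s, so the requested do_verify read-back phase never starts (fio says
  # exactly that below)
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0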
00:14:02.392 00:45:54 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:02.392 fio: verification read phase will never start because write phase uses all of runtime 00:14:02.392 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:02.392 fio-3.35 00:14:02.392 Starting 1 process 00:14:14.606 00:14:14.606 fio_test: (groupid=0, jobs=1): err= 0: pid=83010: Sun Nov 17 00:46:04 2024 00:14:14.606 write: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(576MiB/10001msec); 0 zone resets 00:14:14.606 clat (usec): min=42, max=4203, avg=67.09, stdev=98.55 00:14:14.606 lat (usec): min=42, max=4221, avg=67.52, stdev=98.57 00:14:14.606 clat percentiles (usec): 00:14:14.606 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 58], 00:14:14.606 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:14:14.606 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 77], 00:14:14.606 | 99.00th=[ 115], 99.50th=[ 131], 99.90th=[ 2147], 99.95th=[ 2835], 00:14:14.606 | 99.99th=[ 3490] 00:14:14.606 bw ( KiB/s): min=41888, max=63704, per=100.00%, avg=59003.79, stdev=5121.57, samples=19 00:14:14.606 iops : min=10472, max=15926, avg=14750.95, stdev=1280.39, samples=19 00:14:14.606 lat (usec) : 50=0.25%, 100=97.68%, 250=1.80%, 500=0.09%, 750=0.01% 00:14:14.606 lat (usec) : 1000=0.01% 00:14:14.606 lat (msec) : 2=0.05%, 4=0.11%, 10=0.01% 00:14:14.606 cpu : usr=2.22%, sys=13.81%, ctx=147404, majf=0, minf=795 00:14:14.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:14.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.606 issued rwts: total=0,147403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:14.606 00:14:14.606 Run status group 0 (all jobs): 00:14:14.606 WRITE: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=576MiB (604MB), run=10001-10001msec 00:14:14.606 00:14:14.606 Disk stats (read/write): 00:14:14.606 ublkb0: ios=0/145831, merge=0/0, ticks=0/8170, in_queue=8171, util=99.08% 00:14:14.606 00:46:04 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 [2024-11-17 00:46:04.464890] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.606 [2024-11-17 00:46:04.510412] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.606 [2024-11-17 00:46:04.511081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.606 [2024-11-17 00:46:04.520376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.606 [2024-11-17 00:46:04.520720] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:14.606 [2024-11-17 00:46:04.520781] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.606 00:46:04 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 [2024-11-17 00:46:04.528463] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:14.606 request: 00:14:14.606 { 00:14:14.606 "ublk_id": 0, 00:14:14.606 "method": "ublk_stop_disk", 00:14:14.606 "req_id": 1 00:14:14.606 } 00:14:14.606 Got JSON-RPC error response 00:14:14.606 response: 00:14:14.606 { 00:14:14.606 "code": -19, 00:14:14.606 "message": "No such device" 00:14:14.606 } 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:14.606 00:46:04 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 [2024-11-17 00:46:04.544437] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:14.606 [2024-11-17 00:46:04.546264] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:14.606 [2024-11-17 00:46:04.546293] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.606 00:46:04 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.606 00:46:04 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:14.606 00:46:04 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.606 00:46:04 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:14.606 00:46:04 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:14.606 00:46:04 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:14.606 00:46:04 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.606 00:46:04 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:14.606 00:46:04 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:14.606 ************************************ 00:14:14.606 END TEST test_create_ublk 00:14:14.606 ************************************ 00:14:14.606 00:46:04 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:14.606 00:14:14.606 real 0m10.807s 00:14:14.606 user 0m0.517s 00:14:14.606 sys 0m1.467s 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 00:46:04 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:14.606 00:46:04 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:14.606 00:46:04 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:14.606 00:46:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 ************************************ 00:14:14.606 START TEST test_create_multi_ublk 00:14:14.606 ************************************ 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 [2024-11-17 00:46:04.747374] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:14.606 [2024-11-17 00:46:04.748237] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.606 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 [2024-11-17 00:46:04.819480] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:14.606 [2024-11-17 00:46:04.819770] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:14.606 [2024-11-17 00:46:04.819783] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:14.606 [2024-11-17 00:46:04.819788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.606 [2024-11-17 00:46:04.843384] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.607 [2024-11-17 00:46:04.843400] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.607 [2024-11-17 00:46:04.855381] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.607 [2024-11-17 00:46:04.855850] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:14.607 [2024-11-17 00:46:04.895380] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.607 00:46:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 [2024-11-17 00:46:04.979463] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:14.607 [2024-11-17 00:46:04.979751] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:14.607 [2024-11-17 00:46:04.979763] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:14.607 [2024-11-17 00:46:04.979769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.607 [2024-11-17 00:46:04.991391] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.607 [2024-11-17 00:46:04.991410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.607 [2024-11-17 00:46:05.003378] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.607 [2024-11-17 00:46:05.003859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:14.607 [2024-11-17 00:46:05.039378] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.607 
00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 [2024-11-17 00:46:05.123478] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:14.607 [2024-11-17 00:46:05.123767] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:14.607 [2024-11-17 00:46:05.123775] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:14.607 [2024-11-17 00:46:05.123780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.607 [2024-11-17 00:46:05.135394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.607 [2024-11-17 00:46:05.135410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.607 [2024-11-17 00:46:05.147377] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.607 [2024-11-17 00:46:05.147850] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:14.607 [2024-11-17 00:46:05.183393] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 [2024-11-17 00:46:05.267471] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:14.607 [2024-11-17 00:46:05.267766] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:14.607 [2024-11-17 00:46:05.267778] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:14.607 [2024-11-17 00:46:05.267784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.607 
[2024-11-17 00:46:05.279392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.607 [2024-11-17 00:46:05.279412] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.607 [2024-11-17 00:46:05.291399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.607 [2024-11-17 00:46:05.291878] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:14.607 [2024-11-17 00:46:05.315385] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:14.607 { 00:14:14.607 "ublk_device": "/dev/ublkb0", 00:14:14.607 "id": 0, 00:14:14.607 "queue_depth": 512, 00:14:14.607 "num_queues": 4, 00:14:14.607 "bdev_name": "Malloc0" 00:14:14.607 }, 00:14:14.607 { 00:14:14.607 "ublk_device": "/dev/ublkb1", 00:14:14.607 "id": 1, 00:14:14.607 "queue_depth": 512, 00:14:14.607 "num_queues": 4, 00:14:14.607 "bdev_name": "Malloc1" 00:14:14.607 }, 00:14:14.607 { 00:14:14.607 "ublk_device": "/dev/ublkb2", 00:14:14.607 "id": 2, 00:14:14.607 "queue_depth": 512, 00:14:14.607 "num_queues": 4, 00:14:14.607 "bdev_name": "Malloc2" 00:14:14.607 }, 00:14:14.607 { 00:14:14.607 "ublk_device": "/dev/ublkb3", 00:14:14.607 "id": 3, 00:14:14.607 "queue_depth": 512, 00:14:14.607 "num_queues": 4, 00:14:14.607 "bdev_name": "Malloc3" 00:14:14.607 } 00:14:14.607 ]' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:14.607 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.608 00:46:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 [2024-11-17 00:46:05.979431] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.608 [2024-11-17 00:46:06.026415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.608 [2024-11-17 00:46:06.027265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.608 [2024-11-17 00:46:06.035374] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.608 [2024-11-17 00:46:06.035632] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:14.608 [2024-11-17 00:46:06.035645] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 [2024-11-17 00:46:06.043439] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.608 [2024-11-17 00:46:06.079885] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.608 [2024-11-17 00:46:06.080959] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.608 [2024-11-17 00:46:06.090376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.608 [2024-11-17 00:46:06.090603] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:14.608 [2024-11-17 00:46:06.090614] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 [2024-11-17 00:46:06.106460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.608 [2024-11-17 00:46:06.142413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.608 [2024-11-17 00:46:06.143161] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.608 [2024-11-17 00:46:06.150385] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.608 [2024-11-17 00:46:06.150617] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:14.608 [2024-11-17 00:46:06.150627] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.608 [2024-11-17 00:46:06.166433] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.608 [2024-11-17 00:46:06.205875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.608 [2024-11-17 00:46:06.206752] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.608 [2024-11-17 00:46:06.213388] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.608 [2024-11-17 00:46:06.213642] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:14.608 [2024-11-17 00:46:06.213652] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:14.608 [2024-11-17 00:46:06.413436] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:14.608 [2024-11-17 00:46:06.415156] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:14.608 [2024-11-17 00:46:06.415185] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.608 00:46:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:14.867 00:46:06 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:14.867 ************************************ 00:14:14.867 END TEST test_create_multi_ublk 00:14:14.867 ************************************ 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:14.867 00:14:14.867 real 0m2.024s 00:14:14.867 user 0m0.803s 00:14:14.867 sys 0m0.147s 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.867 00:46:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.867 00:46:06 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:14.867 00:46:06 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:14.867 00:46:06 ublk -- ublk/ublk.sh@130 -- # killprocess 82971 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@950 -- # '[' -z 82971 ']' 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@954 -- # kill -0 82971 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@955 -- # uname 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82971 00:14:14.867 killing process with pid 82971 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82971' 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@969 -- # kill 82971 00:14:14.867 00:46:06 ublk -- common/autotest_common.sh@974 -- # wait 82971 00:14:15.127 [2024-11-17 00:46:06.980206] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:15.127 [2024-11-17 00:46:06.980263] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:15.386 ************************************ 00:14:15.386 END TEST ublk 00:14:15.386 ************************************ 00:14:15.386 00:14:15.386 real 0m18.647s 00:14:15.386 user 0m27.788s 00:14:15.386 sys 0m8.573s 00:14:15.386 00:46:07 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.386 00:46:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.386 00:46:07 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:15.386 
00:46:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:15.386 00:46:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.386 00:46:07 -- common/autotest_common.sh@10 -- # set +x 00:14:15.386 ************************************ 00:14:15.386 START TEST ublk_recovery 00:14:15.386 ************************************ 00:14:15.386 00:46:07 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:15.386 * Looking for test storage... 00:14:15.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:15.386 00:46:07 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:15.386 00:46:07 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:15.386 00:46:07 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:15.386 00:46:07 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.386 00:46:07 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.645 00:46:07 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.646 --rc genhtml_branch_coverage=1 00:14:15.646 --rc genhtml_function_coverage=1 00:14:15.646 --rc genhtml_legend=1 00:14:15.646 --rc geninfo_all_blocks=1 00:14:15.646 --rc geninfo_unexecuted_blocks=1 00:14:15.646 00:14:15.646 ' 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.646 --rc genhtml_branch_coverage=1 00:14:15.646 --rc genhtml_function_coverage=1 00:14:15.646 --rc genhtml_legend=1 00:14:15.646 --rc geninfo_all_blocks=1 00:14:15.646 --rc geninfo_unexecuted_blocks=1 00:14:15.646 00:14:15.646 ' 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.646 --rc genhtml_branch_coverage=1 00:14:15.646 --rc genhtml_function_coverage=1 00:14:15.646 --rc genhtml_legend=1 00:14:15.646 --rc geninfo_all_blocks=1 00:14:15.646 --rc geninfo_unexecuted_blocks=1 00:14:15.646 00:14:15.646 ' 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.646 --rc genhtml_branch_coverage=1 00:14:15.646 --rc genhtml_function_coverage=1 00:14:15.646 --rc genhtml_legend=1 00:14:15.646 --rc geninfo_all_blocks=1 00:14:15.646 --rc geninfo_unexecuted_blocks=1 00:14:15.646 00:14:15.646 ' 00:14:15.646 00:46:07 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:15.646 00:46:07 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:15.646 00:46:07 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:15.646 00:46:07 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:15.646 00:46:07 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:15.646 00:46:07 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:15.646 00:46:07 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:15.646 00:46:07 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:15.646 00:46:07 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:15.646 00:46:07 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:15.646 00:46:07 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=83337 00:14:15.646 00:46:07 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:15.646 00:46:07 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 83337 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83337 ']' 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.646 00:46:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.646 00:46:07 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:15.646 [2024-11-17 00:46:07.539051] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:15.646 [2024-11-17 00:46:07.539170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83337 ] 00:14:15.646 [2024-11-17 00:46:07.687327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:15.904 [2024-11-17 00:46:07.716842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.904 [2024-11-17 00:46:07.716883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:16.471 00:46:08 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.471 [2024-11-17 00:46:08.323371] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:16.471 [2024-11-17 00:46:08.324284] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.471 00:46:08 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.471 malloc0 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.471 00:46:08 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.471 [2024-11-17 00:46:08.355467] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:16.471 [2024-11-17 00:46:08.355562] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:16.471 [2024-11-17 00:46:08.355568] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:16.471 [2024-11-17 00:46:08.355575] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.471 [2024-11-17 00:46:08.364448] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.471 [2024-11-17 00:46:08.364474] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.471 [2024-11-17 00:46:08.371383] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.471 [2024-11-17 00:46:08.371491] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:16.471 [2024-11-17 00:46:08.381396] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.471 1 00:14:16.471 00:46:08 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.471 00:46:08 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:17.405 00:46:09 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=83370 00:14:17.405 00:46:09 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:17.405 00:46:09 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:17.663 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:17.663 fio-3.35 00:14:17.663 Starting 1 process 00:14:22.935 00:46:14 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 83337 00:14:22.935 00:46:14 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:28.254 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 83337 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:28.254 00:46:19 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83481 00:14:28.254 00:46:19 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:28.254 00:46:19 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83481 00:14:28.254 00:46:19 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:28.254 00:46:19 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83481 ']' 00:14:28.254 00:46:19 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.254 00:46:19 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.254 00:46:19 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.254 00:46:19 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.254 00:46:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:28.254 [2024-11-17 00:46:19.491089] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
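From here the trace shows the recovery handshake itself. On the freshly started target the script reissues three RPCs; everything after that, including the once-per-second UBLK_CMD_GET_DEV_INFO retries visible below and the closing UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY pair, is driven inside the target rather than by the caller. A minimal sketch of the caller's side (RPC names and arguments as used in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096   # same name and geometry as before the crash
    $rpc ublk_recover_disk malloc0 1             # rebind bdev malloc0 to existing ublk device 1

The "line 38: 83337 Killed" message just above is the shell reaping the backgrounded first target: the kill -9 is deliberate, and bash attributes the notice to the line it was executing (the sleep at ublk_recovery.sh line 38) when it noticed the job had died.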
00:14:28.254 [2024-11-17 00:46:19.491787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83481 ] 00:14:28.254 [2024-11-17 00:46:19.641993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:28.254 [2024-11-17 00:46:19.692967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.254 [2024-11-17 00:46:19.693036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:28.516 00:46:20 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:28.516 [2024-11-17 00:46:20.354381] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:28.516 [2024-11-17 00:46:20.356157] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.516 00:46:20 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:28.516 malloc0 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.516 00:46:20 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:28.516 [2024-11-17 00:46:20.402554] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:28.516 [2024-11-17 00:46:20.402616] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:28.516 [2024-11-17 00:46:20.402640] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:28.516 [2024-11-17 00:46:20.410461] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:28.516 [2024-11-17 00:46:20.410489] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:28.516 1 00:14:28.516 00:46:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.516 00:46:20 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 83370 00:14:29.455 [2024-11-17 00:46:21.410525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:29.455 [2024-11-17 00:46:21.418378] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:29.455 [2024-11-17 00:46:21.418396] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:30.389 [2024-11-17 00:46:22.418417] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:30.389 [2024-11-17 00:46:22.422386] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:30.389 [2024-11-17 00:46:22.422399] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:31.764 [2024-11-17 00:46:23.422426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:31.764 [2024-11-17 00:46:23.429382] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:31.764 [2024-11-17 00:46:23.429400] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:31.764 [2024-11-17 00:46:23.429407] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:31.764 [2024-11-17 00:46:23.429484] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:53.689 [2024-11-17 00:46:44.457377] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:53.689 [2024-11-17 00:46:44.464001] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:53.689 [2024-11-17 00:46:44.471576] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:53.689 [2024-11-17 00:46:44.471597] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:20.223 00:15:20.223 fio_test: (groupid=0, jobs=1): err= 0: pid=83373: Sun Nov 17 00:47:09 2024 00:15:20.223 read: IOPS=14.8k, BW=57.6MiB/s (60.4MB/s)(3458MiB/60002msec) 00:15:20.223 slat (nsec): min=1126, max=121396, avg=4878.46, stdev=1166.83 00:15:20.223 clat (usec): min=674, max=30086k, avg=4296.20, stdev=255778.59 00:15:20.223 lat (usec): min=679, max=30086k, avg=4301.07, stdev=255778.59 00:15:20.223 clat percentiles (usec): 00:15:20.223 | 1.00th=[ 1729], 5.00th=[ 1860], 10.00th=[ 1876], 20.00th=[ 1909], 00:15:20.223 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1958], 00:15:20.223 | 70.00th=[ 1975], 80.00th=[ 1991], 90.00th=[ 2040], 95.00th=[ 3392], 00:15:20.223 | 99.00th=[ 5604], 99.50th=[ 5932], 99.90th=[ 9765], 99.95th=[13042], 00:15:20.223 | 99.99th=[13566] 00:15:20.223 bw ( KiB/s): min=17632, max=126112, per=100.00%, avg=116162.00, stdev=21085.35, samples=60 00:15:20.223 iops : min= 4408, max=31528, avg=29040.50, stdev=5271.34, samples=60 00:15:20.223 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(3452MiB/60002msec); 0 zone resets 00:15:20.223 slat (nsec): min=1158, max=121339, avg=4907.62, stdev=1175.21 00:15:20.223 clat (usec): min=680, max=30086k, avg=4375.95, stdev=255981.31 00:15:20.223 lat (usec): min=685, max=30086k, avg=4380.86, stdev=255981.31 00:15:20.223 clat percentiles (usec): 00:15:20.223 | 1.00th=[ 1778], 5.00th=[ 1942], 10.00th=[ 1975], 20.00th=[ 1991], 00:15:20.223 | 30.00th=[ 2008], 40.00th=[ 2024], 50.00th=[ 2040], 60.00th=[ 2057], 00:15:20.223 | 70.00th=[ 2073], 80.00th=[ 2089], 90.00th=[ 2147], 95.00th=[ 3261], 00:15:20.223 | 99.00th=[ 5669], 99.50th=[ 5997], 99.90th=[ 8848], 99.95th=[13042], 00:15:20.223 | 99.99th=[13829] 00:15:20.223 bw ( KiB/s): min=18016, max=126160, per=100.00%, avg=115988.53, stdev=20880.73, samples=60 00:15:20.223 iops : min= 4504, max=31540, avg=28997.13, stdev=5220.18, samples=60 00:15:20.223 lat (usec) : 750=0.01%, 1000=0.01% 00:15:20.223 lat (msec) : 2=51.79%, 4=44.25%, 10=3.86%, 20=0.09%, >=2000=0.01% 00:15:20.223 cpu : usr=3.34%, sys=14.85%, ctx=58397, majf=0, minf=13 00:15:20.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:20.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:20.223 issued rwts: total=885224,883829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:20.223 00:15:20.223 Run status group 0 (all jobs): 00:15:20.223 READ: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=3458MiB (3626MB), run=60002-60002msec 00:15:20.223 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=3452MiB (3620MB), run=60002-60002msec 00:15:20.223 00:15:20.223 Disk stats (read/write): 00:15:20.223 ublkb1: ios=881820/880511, merge=0/0, ticks=3750886/3743446, in_queue=7494333, util=99.91% 00:15:20.223 00:47:09 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:20.223 [2024-11-17 00:47:09.648278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:20.223 [2024-11-17 00:47:09.695396] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:20.223 [2024-11-17 00:47:09.695568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:20.223 [2024-11-17 00:47:09.706391] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:20.223 [2024-11-17 00:47:09.706477] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:20.223 [2024-11-17 00:47:09.706487] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.223 00:47:09 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:20.223 [2024-11-17 00:47:09.721425] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:20.223 [2024-11-17 00:47:09.722654] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:20.223 [2024-11-17 00:47:09.722692] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.223 00:47:09 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:20.223 00:47:09 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:20.223 00:47:09 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83481 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83481 ']' 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83481 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:20.223 00:47:09 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.224 00:47:09 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83481 00:15:20.224 00:47:09 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.224 00:47:09 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.224 killing process with pid 83481 00:15:20.224 00:47:09 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83481' 00:15:20.224 00:47:09 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83481 00:15:20.224 00:47:09 ublk_recovery -- common/autotest_common.sh@974 -- # 
wait 83481 00:15:20.224 [2024-11-17 00:47:09.927797] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:20.224 [2024-11-17 00:47:09.927852] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:20.224 00:15:20.224 real 1m2.924s 00:15:20.224 user 1m44.060s 00:15:20.224 sys 0m22.136s 00:15:20.224 ************************************ 00:15:20.224 END TEST ublk_recovery 00:15:20.224 ************************************ 00:15:20.224 00:47:10 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.224 00:47:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:20.224 00:47:10 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:20.224 00:47:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:20.224 00:47:10 -- common/autotest_common.sh@10 -- # set +x 00:15:20.224 00:47:10 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:20.224 00:47:10 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:20.224 00:47:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:20.224 00:47:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.224 00:47:10 -- common/autotest_common.sh@10 -- # set +x 00:15:20.224 ************************************ 00:15:20.224 START TEST ftl 00:15:20.224 ************************************ 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:20.224 * Looking for test storage... 
00:15:20.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:20.224 00:47:10 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.224 00:47:10 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.224 00:47:10 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.224 00:47:10 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.224 00:47:10 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.224 00:47:10 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.224 00:47:10 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.224 00:47:10 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.224 00:47:10 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.224 00:47:10 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.224 00:47:10 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.224 00:47:10 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:20.224 00:47:10 ftl -- scripts/common.sh@345 -- # : 1 00:15:20.224 00:47:10 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.224 00:47:10 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:20.224 00:47:10 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:20.224 00:47:10 ftl -- scripts/common.sh@353 -- # local d=1 00:15:20.224 00:47:10 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.224 00:47:10 ftl -- scripts/common.sh@355 -- # echo 1 00:15:20.224 00:47:10 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.224 00:47:10 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:20.224 00:47:10 ftl -- scripts/common.sh@353 -- # local d=2 00:15:20.224 00:47:10 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.224 00:47:10 ftl -- scripts/common.sh@355 -- # echo 2 00:15:20.224 00:47:10 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.224 00:47:10 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.224 00:47:10 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.224 00:47:10 ftl -- scripts/common.sh@368 -- # return 0 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:20.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.224 --rc genhtml_branch_coverage=1 00:15:20.224 --rc genhtml_function_coverage=1 00:15:20.224 --rc genhtml_legend=1 00:15:20.224 --rc geninfo_all_blocks=1 00:15:20.224 --rc geninfo_unexecuted_blocks=1 00:15:20.224 00:15:20.224 ' 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:20.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.224 --rc genhtml_branch_coverage=1 00:15:20.224 --rc genhtml_function_coverage=1 00:15:20.224 --rc genhtml_legend=1 00:15:20.224 --rc geninfo_all_blocks=1 00:15:20.224 --rc geninfo_unexecuted_blocks=1 00:15:20.224 00:15:20.224 ' 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:20.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.224 --rc genhtml_branch_coverage=1 00:15:20.224 --rc genhtml_function_coverage=1 00:15:20.224 --rc 
genhtml_legend=1 00:15:20.224 --rc geninfo_all_blocks=1 00:15:20.224 --rc geninfo_unexecuted_blocks=1 00:15:20.224 00:15:20.224 ' 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:20.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.224 --rc genhtml_branch_coverage=1 00:15:20.224 --rc genhtml_function_coverage=1 00:15:20.224 --rc genhtml_legend=1 00:15:20.224 --rc geninfo_all_blocks=1 00:15:20.224 --rc geninfo_unexecuted_blocks=1 00:15:20.224 00:15:20.224 ' 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:20.224 00:47:10 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:20.224 00:47:10 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:20.224 00:47:10 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:20.224 00:47:10 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:20.224 00:47:10 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:20.224 00:47:10 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.224 00:47:10 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:20.224 00:47:10 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:20.224 00:47:10 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.224 00:47:10 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.224 00:47:10 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:20.224 00:47:10 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:20.224 00:47:10 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:20.224 00:47:10 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:20.224 00:47:10 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:20.224 00:47:10 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:20.224 00:47:10 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.224 00:47:10 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.224 00:47:10 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:20.224 00:47:10 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:20.224 00:47:10 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:20.224 00:47:10 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:20.224 00:47:10 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:20.224 00:47:10 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:20.224 00:47:10 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:20.224 00:47:10 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:20.224 00:47:10 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:20.224 00:47:10 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:20.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:20.224 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:20.224 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:20.224 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:20.224 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84274 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84274 00:15:20.224 00:47:10 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@831 -- # '[' -z 84274 ']' 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.224 00:47:10 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.225 00:47:10 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.225 00:47:10 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.225 00:47:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:20.225 [2024-11-17 00:47:11.056243] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:20.225 [2024-11-17 00:47:11.056388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84274 ] 00:15:20.225 [2024-11-17 00:47:11.204470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.225 [2024-11-17 00:47:11.257236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.225 00:47:11 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.225 00:47:11 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:20.225 00:47:11 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:20.225 00:47:12 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:20.485 00:47:12 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:20.485 00:47:12 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:21.056 00:47:12 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:21.056 00:47:12 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:21.057 00:47:12 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:21.315 00:47:13 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:21.315 00:47:13 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:21.315 00:47:13 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:21.315 00:47:13 ftl -- ftl/ftl.sh@50 -- # break 00:15:21.315 00:47:13 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:21.315 00:47:13 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:21.315 00:47:13 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:21.315 00:47:13 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:21.573 00:47:13 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:21.573 00:47:13 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:21.573 00:47:13 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:21.573 00:47:13 ftl -- ftl/ftl.sh@63 -- # break 00:15:21.573 00:47:13 ftl -- ftl/ftl.sh@66 -- # killprocess 84274 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@950 -- # '[' -z 84274 ']' 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@954 -- # kill -0 84274 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@955 -- # uname 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84274 00:15:21.573 killing process with pid 84274 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84274' 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@969 -- # kill 84274 00:15:21.573 00:47:13 ftl -- common/autotest_common.sh@974 -- # wait 84274 00:15:21.835 00:47:13 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:21.835 00:47:13 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:21.835 00:47:13 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:21.835 00:47:13 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.835 00:47:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:21.835 ************************************ 00:15:21.835 START TEST ftl_fio_basic 00:15:21.835 ************************************ 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:21.835 * Looking for test storage... 
00:15:21.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:21.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.835 --rc genhtml_branch_coverage=1 00:15:21.835 --rc genhtml_function_coverage=1 00:15:21.835 --rc genhtml_legend=1 00:15:21.835 --rc geninfo_all_blocks=1 00:15:21.835 --rc geninfo_unexecuted_blocks=1 00:15:21.835 00:15:21.835 ' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:21.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.835 --rc 
genhtml_branch_coverage=1 00:15:21.835 --rc genhtml_function_coverage=1 00:15:21.835 --rc genhtml_legend=1 00:15:21.835 --rc geninfo_all_blocks=1 00:15:21.835 --rc geninfo_unexecuted_blocks=1 00:15:21.835 00:15:21.835 ' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:21.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.835 --rc genhtml_branch_coverage=1 00:15:21.835 --rc genhtml_function_coverage=1 00:15:21.835 --rc genhtml_legend=1 00:15:21.835 --rc geninfo_all_blocks=1 00:15:21.835 --rc geninfo_unexecuted_blocks=1 00:15:21.835 00:15:21.835 ' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:21.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.835 --rc genhtml_branch_coverage=1 00:15:21.835 --rc genhtml_function_coverage=1 00:15:21.835 --rc genhtml_legend=1 00:15:21.835 --rc geninfo_all_blocks=1 00:15:21.835 --rc geninfo_unexecuted_blocks=1 00:15:21.835 00:15:21.835 ' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:21.835 
00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:21.835 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=84392 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 84392 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 84392 ']' 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
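Before any fio workload runs, the prologue below assembles the FTL device under test from the two QEMU NVMe controllers selected earlier: the base bdev on 0000:00:11.0 and the non-volatile cache on 0000:00:10.0. Flattened into one sequence (sizes are in MiB; the lvstore and lvol UUIDs 56bd1c26-... and bd58257d-... are specific to this run and would differ on a fresh one):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base device -> nvme0n1
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                     # prints the lvstore UUID
    base=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")          # thin 103424 MiB lvol = FTL base
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # cache device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB slice -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 60

One wrinkle a few screens down is worth flagging: fio.sh line 52 evaluates '[' -eq 1 ']' because the variable it tests is unset, so bash prints "[: -eq: unary operator expected"; the test evaluates false and the script carries on, computing the L2P DRAM size itself at line 56.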
00:15:21.836 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.836 00:47:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:22.095 [2024-11-17 00:47:13.961603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:22.095 [2024-11-17 00:47:13.962017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84392 ] 00:15:22.095 [2024-11-17 00:47:14.109344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:22.095 [2024-11-17 00:47:14.140774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.095 [2024-11-17 00:47:14.140995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.095 [2024-11-17 00:47:14.141049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:23.028 00:47:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:23.286 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:23.286 { 00:15:23.286 "name": "nvme0n1", 00:15:23.286 "aliases": [ 00:15:23.286 "b120a7b3-77da-459e-95c0-90f60a02f4fd" 00:15:23.286 ], 00:15:23.286 "product_name": "NVMe disk", 00:15:23.286 "block_size": 4096, 00:15:23.286 "num_blocks": 1310720, 00:15:23.286 "uuid": "b120a7b3-77da-459e-95c0-90f60a02f4fd", 00:15:23.286 "numa_id": -1, 00:15:23.286 "assigned_rate_limits": { 00:15:23.286 "rw_ios_per_sec": 0, 00:15:23.286 "rw_mbytes_per_sec": 0, 00:15:23.286 "r_mbytes_per_sec": 0, 00:15:23.286 "w_mbytes_per_sec": 0 00:15:23.286 }, 00:15:23.286 "claimed": false, 00:15:23.286 "zoned": false, 00:15:23.286 "supported_io_types": { 00:15:23.286 "read": true, 00:15:23.286 "write": true, 00:15:23.286 "unmap": true, 00:15:23.286 "flush": true, 00:15:23.286 "reset": true, 00:15:23.286 "nvme_admin": true, 00:15:23.286 "nvme_io": true, 00:15:23.286 "nvme_io_md": 
false, 00:15:23.286 "write_zeroes": true, 00:15:23.286 "zcopy": false, 00:15:23.286 "get_zone_info": false, 00:15:23.286 "zone_management": false, 00:15:23.286 "zone_append": false, 00:15:23.286 "compare": true, 00:15:23.286 "compare_and_write": false, 00:15:23.286 "abort": true, 00:15:23.286 "seek_hole": false, 00:15:23.286 "seek_data": false, 00:15:23.286 "copy": true, 00:15:23.286 "nvme_iov_md": false 00:15:23.286 }, 00:15:23.286 "driver_specific": { 00:15:23.286 "nvme": [ 00:15:23.286 { 00:15:23.286 "pci_address": "0000:00:11.0", 00:15:23.286 "trid": { 00:15:23.286 "trtype": "PCIe", 00:15:23.286 "traddr": "0000:00:11.0" 00:15:23.286 }, 00:15:23.286 "ctrlr_data": { 00:15:23.286 "cntlid": 0, 00:15:23.286 "vendor_id": "0x1b36", 00:15:23.286 "model_number": "QEMU NVMe Ctrl", 00:15:23.286 "serial_number": "12341", 00:15:23.286 "firmware_revision": "8.0.0", 00:15:23.286 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:23.286 "oacs": { 00:15:23.286 "security": 0, 00:15:23.286 "format": 1, 00:15:23.286 "firmware": 0, 00:15:23.286 "ns_manage": 1 00:15:23.286 }, 00:15:23.286 "multi_ctrlr": false, 00:15:23.286 "ana_reporting": false 00:15:23.286 }, 00:15:23.286 "vs": { 00:15:23.286 "nvme_version": "1.4" 00:15:23.286 }, 00:15:23.286 "ns_data": { 00:15:23.287 "id": 1, 00:15:23.287 "can_share": false 00:15:23.287 } 00:15:23.287 } 00:15:23.287 ], 00:15:23.287 "mp_policy": "active_passive" 00:15:23.287 } 00:15:23.287 } 00:15:23.287 ]' 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:23.287 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:23.544 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:23.544 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=56bd1c26-5d53-4efc-8071-122538257bbc 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 56bd1c26-5d53-4efc-8071-122538257bbc 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:23.802 00:47:15 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:23.802 00:47:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:24.059 { 00:15:24.059 "name": "bd58257d-aa66-407f-a7a9-b1089ff8b9ff", 00:15:24.059 "aliases": [ 00:15:24.059 "lvs/nvme0n1p0" 00:15:24.059 ], 00:15:24.059 "product_name": "Logical Volume", 00:15:24.059 "block_size": 4096, 00:15:24.059 "num_blocks": 26476544, 00:15:24.059 "uuid": "bd58257d-aa66-407f-a7a9-b1089ff8b9ff", 00:15:24.059 "assigned_rate_limits": { 00:15:24.059 "rw_ios_per_sec": 0, 00:15:24.059 "rw_mbytes_per_sec": 0, 00:15:24.059 "r_mbytes_per_sec": 0, 00:15:24.059 "w_mbytes_per_sec": 0 00:15:24.059 }, 00:15:24.059 "claimed": false, 00:15:24.059 "zoned": false, 00:15:24.059 "supported_io_types": { 00:15:24.059 "read": true, 00:15:24.059 "write": true, 00:15:24.059 "unmap": true, 00:15:24.059 "flush": false, 00:15:24.059 "reset": true, 00:15:24.059 "nvme_admin": false, 00:15:24.059 "nvme_io": false, 00:15:24.059 "nvme_io_md": false, 00:15:24.059 "write_zeroes": true, 00:15:24.059 "zcopy": false, 00:15:24.059 "get_zone_info": false, 00:15:24.059 "zone_management": false, 00:15:24.059 "zone_append": false, 00:15:24.059 "compare": false, 00:15:24.059 "compare_and_write": false, 00:15:24.059 "abort": false, 00:15:24.059 "seek_hole": true, 00:15:24.059 "seek_data": true, 00:15:24.059 "copy": false, 00:15:24.059 "nvme_iov_md": false 00:15:24.059 }, 00:15:24.059 "driver_specific": { 00:15:24.059 "lvol": { 00:15:24.059 "lvol_store_uuid": "56bd1c26-5d53-4efc-8071-122538257bbc", 00:15:24.059 "base_bdev": "nvme0n1", 00:15:24.059 "thin_provision": true, 00:15:24.059 "num_allocated_clusters": 0, 00:15:24.059 "snapshot": false, 00:15:24.059 "clone": false, 00:15:24.059 "esnap_clone": false 00:15:24.059 } 00:15:24.059 } 00:15:24.059 } 00:15:24.059 ]' 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:24.059 00:47:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:24.317 00:47:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:24.317 00:47:16 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:24.317 00:47:16 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:24.317 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:24.317 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:24.317 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:24.317 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:24.317 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:24.574 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:24.574 { 00:15:24.574 "name": "bd58257d-aa66-407f-a7a9-b1089ff8b9ff", 00:15:24.574 "aliases": [ 00:15:24.574 "lvs/nvme0n1p0" 00:15:24.574 ], 00:15:24.574 "product_name": "Logical Volume", 00:15:24.574 "block_size": 4096, 00:15:24.574 "num_blocks": 26476544, 00:15:24.574 "uuid": "bd58257d-aa66-407f-a7a9-b1089ff8b9ff", 00:15:24.574 "assigned_rate_limits": { 00:15:24.574 "rw_ios_per_sec": 0, 00:15:24.574 "rw_mbytes_per_sec": 0, 00:15:24.574 "r_mbytes_per_sec": 0, 00:15:24.574 "w_mbytes_per_sec": 0 00:15:24.574 }, 00:15:24.574 "claimed": false, 00:15:24.574 "zoned": false, 00:15:24.574 "supported_io_types": { 00:15:24.574 "read": true, 00:15:24.574 "write": true, 00:15:24.574 "unmap": true, 00:15:24.574 "flush": false, 00:15:24.574 "reset": true, 00:15:24.574 "nvme_admin": false, 00:15:24.574 "nvme_io": false, 00:15:24.574 "nvme_io_md": false, 00:15:24.574 "write_zeroes": true, 00:15:24.574 "zcopy": false, 00:15:24.574 "get_zone_info": false, 00:15:24.574 "zone_management": false, 00:15:24.574 "zone_append": false, 00:15:24.574 "compare": false, 00:15:24.574 "compare_and_write": false, 00:15:24.574 "abort": false, 00:15:24.574 "seek_hole": true, 00:15:24.574 "seek_data": true, 00:15:24.574 "copy": false, 00:15:24.574 "nvme_iov_md": false 00:15:24.574 }, 00:15:24.574 "driver_specific": { 00:15:24.574 "lvol": { 00:15:24.574 "lvol_store_uuid": "56bd1c26-5d53-4efc-8071-122538257bbc", 00:15:24.574 "base_bdev": "nvme0n1", 00:15:24.574 "thin_provision": true, 00:15:24.574 "num_allocated_clusters": 0, 00:15:24.574 "snapshot": false, 00:15:24.574 "clone": false, 00:15:24.574 "esnap_clone": false 00:15:24.574 } 00:15:24.574 } 00:15:24.574 } 00:15:24.574 ]' 00:15:24.574 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:24.574 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:24.574 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:24.574 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:24.574 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:24.574 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:24.574 00:47:16 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:24.574 00:47:16 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:24.832 00:47:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:24.832 00:47:16 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:24.832 00:47:16 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:24.832 
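
The bare '[' -eq 1 ']' traced above is fio.sh line 52 comparing a variable that is empty in this run, so bash's [ builtin sees -eq with no left operand and rejects it; that is the "unary operator expected" message printed immediately below. The test simply evaluates false and the script continues. A short illustration of the failure mode and the usual guards (the variable name $flag here is hypothetical, not the one fio.sh uses):

    flag=                     # empty, as in this run
    [ $flag -eq 1 ]           # expands to: [ -eq 1 ]  -> "unary operator expected"
    [ "${flag:-0}" -eq 1 ]    # guard 1: default the value before comparing
    [[ $flag -eq 1 ]]         # guard 2: [[ ]] evaluates the empty operand as arithmetic 0
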
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:24.832 00:47:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:24.832 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:24.832 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:24.832 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:24.832 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:24.832 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bd58257d-aa66-407f-a7a9-b1089ff8b9ff 00:15:25.090 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:25.090 { 00:15:25.090 "name": "bd58257d-aa66-407f-a7a9-b1089ff8b9ff", 00:15:25.090 "aliases": [ 00:15:25.090 "lvs/nvme0n1p0" 00:15:25.090 ], 00:15:25.090 "product_name": "Logical Volume", 00:15:25.090 "block_size": 4096, 00:15:25.090 "num_blocks": 26476544, 00:15:25.090 "uuid": "bd58257d-aa66-407f-a7a9-b1089ff8b9ff", 00:15:25.090 "assigned_rate_limits": { 00:15:25.090 "rw_ios_per_sec": 0, 00:15:25.090 "rw_mbytes_per_sec": 0, 00:15:25.090 "r_mbytes_per_sec": 0, 00:15:25.090 "w_mbytes_per_sec": 0 00:15:25.090 }, 00:15:25.090 "claimed": false, 00:15:25.090 "zoned": false, 00:15:25.090 "supported_io_types": { 00:15:25.090 "read": true, 00:15:25.090 "write": true, 00:15:25.090 "unmap": true, 00:15:25.090 "flush": false, 00:15:25.090 "reset": true, 00:15:25.090 "nvme_admin": false, 00:15:25.090 "nvme_io": false, 00:15:25.090 "nvme_io_md": false, 00:15:25.090 "write_zeroes": true, 00:15:25.090 "zcopy": false, 00:15:25.090 "get_zone_info": false, 00:15:25.090 "zone_management": false, 00:15:25.090 "zone_append": false, 00:15:25.090 "compare": false, 00:15:25.090 "compare_and_write": false, 00:15:25.090 "abort": false, 00:15:25.090 "seek_hole": true, 00:15:25.090 "seek_data": true, 00:15:25.090 "copy": false, 00:15:25.090 "nvme_iov_md": false 00:15:25.090 }, 00:15:25.090 "driver_specific": { 00:15:25.090 "lvol": { 00:15:25.090 "lvol_store_uuid": "56bd1c26-5d53-4efc-8071-122538257bbc", 00:15:25.090 "base_bdev": "nvme0n1", 00:15:25.090 "thin_provision": true, 00:15:25.090 "num_allocated_clusters": 0, 00:15:25.090 "snapshot": false, 00:15:25.090 "clone": false, 00:15:25.090 "esnap_clone": false 00:15:25.090 } 00:15:25.090 } 00:15:25.090 } 00:15:25.090 ]' 00:15:25.090 00:47:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:25.090 00:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:25.090 00:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:25.090 00:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:25.090 00:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:25.090 00:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:25.090 00:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:25.090 00:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:25.090 00:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bd58257d-aa66-407f-a7a9-b1089ff8b9ff -c nvc0n1p0 --l2p_dram_limit 60 00:15:25.348 [2024-11-17 00:47:17.247382] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.348 [2024-11-17 00:47:17.247422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:25.348 [2024-11-17 00:47:17.247441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:25.348 [2024-11-17 00:47:17.247449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.348 [2024-11-17 00:47:17.247501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.348 [2024-11-17 00:47:17.247510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:25.348 [2024-11-17 00:47:17.247520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:25.348 [2024-11-17 00:47:17.247530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.348 [2024-11-17 00:47:17.247572] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:25.348 [2024-11-17 00:47:17.247792] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:25.348 [2024-11-17 00:47:17.247806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.348 [2024-11-17 00:47:17.247813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:25.348 [2024-11-17 00:47:17.247819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:15:25.348 [2024-11-17 00:47:17.247826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.348 [2024-11-17 00:47:17.247896] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6189555a-87f0-4f33-b0b0-2b9878ea5a18 00:15:25.348 [2024-11-17 00:47:17.248949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.348 [2024-11-17 00:47:17.249049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:25.348 [2024-11-17 00:47:17.249066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:15:25.348 [2024-11-17 00:47:17.249073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.348 [2024-11-17 00:47:17.254313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.348 [2024-11-17 00:47:17.254339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:25.348 [2024-11-17 00:47:17.254370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.173 ms 00:15:25.348 [2024-11-17 00:47:17.254376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.348 [2024-11-17 00:47:17.254457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.348 [2024-11-17 00:47:17.254470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:25.348 [2024-11-17 00:47:17.254479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:15:25.348 [2024-11-17 00:47:17.254484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.348 [2024-11-17 00:47:17.254537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.348 [2024-11-17 00:47:17.254546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:25.348 [2024-11-17 00:47:17.254554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:25.348 [2024-11-17 00:47:17.254559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:25.348 [2024-11-17 00:47:17.254589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:25.348 [2024-11-17 00:47:17.255908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.348 [2024-11-17 00:47:17.256002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:25.348 [2024-11-17 00:47:17.256013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:15:25.349 [2024-11-17 00:47:17.256021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.349 [2024-11-17 00:47:17.256064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.349 [2024-11-17 00:47:17.256072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:25.349 [2024-11-17 00:47:17.256077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:25.349 [2024-11-17 00:47:17.256094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.349 [2024-11-17 00:47:17.256116] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:25.349 [2024-11-17 00:47:17.256238] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:25.349 [2024-11-17 00:47:17.256261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:25.349 [2024-11-17 00:47:17.256273] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:25.349 [2024-11-17 00:47:17.256281] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256289] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256295] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:25.349 [2024-11-17 00:47:17.256304] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:25.349 [2024-11-17 00:47:17.256309] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:25.349 [2024-11-17 00:47:17.256316] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:25.349 [2024-11-17 00:47:17.256322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.349 [2024-11-17 00:47:17.256328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:25.349 [2024-11-17 00:47:17.256334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:15:25.349 [2024-11-17 00:47:17.256341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.349 [2024-11-17 00:47:17.256433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.349 [2024-11-17 00:47:17.256443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:25.349 [2024-11-17 00:47:17.256449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:25.349 [2024-11-17 00:47:17.256456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.349 [2024-11-17 00:47:17.256565] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:25.349 [2024-11-17 00:47:17.256575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:25.349 
[2024-11-17 00:47:17.256581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:25.349 [2024-11-17 00:47:17.256603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:25.349 [2024-11-17 00:47:17.256624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:25.349 [2024-11-17 00:47:17.256637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:25.349 [2024-11-17 00:47:17.256644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:25.349 [2024-11-17 00:47:17.256650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:25.349 [2024-11-17 00:47:17.256659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:25.349 [2024-11-17 00:47:17.256665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:25.349 [2024-11-17 00:47:17.256672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:25.349 [2024-11-17 00:47:17.256685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:25.349 [2024-11-17 00:47:17.256715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:25.349 [2024-11-17 00:47:17.256735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:25.349 [2024-11-17 00:47:17.256753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:25.349 [2024-11-17 00:47:17.256774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:25.349 [2024-11-17 00:47:17.256793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:25.349 [2024-11-17 00:47:17.256806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:25.349 [2024-11-17 00:47:17.256813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:25.349 [2024-11-17 00:47:17.256818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:25.349 [2024-11-17 00:47:17.256825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:25.349 [2024-11-17 00:47:17.256830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:25.349 [2024-11-17 00:47:17.256837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:25.349 [2024-11-17 00:47:17.256852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:25.349 [2024-11-17 00:47:17.256858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256865] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:25.349 [2024-11-17 00:47:17.256871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:25.349 [2024-11-17 00:47:17.256881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.349 [2024-11-17 00:47:17.256895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:25.349 [2024-11-17 00:47:17.256901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:25.349 [2024-11-17 00:47:17.256908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:25.349 [2024-11-17 00:47:17.256914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:25.349 [2024-11-17 00:47:17.256921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:25.349 [2024-11-17 00:47:17.256927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:25.349 [2024-11-17 00:47:17.256936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:25.349 [2024-11-17 00:47:17.256945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:25.349 [2024-11-17 00:47:17.256956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:25.349 [2024-11-17 00:47:17.256962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:25.349 [2024-11-17 00:47:17.256969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:25.349 [2024-11-17 00:47:17.256976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:25.349 [2024-11-17 00:47:17.256983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:25.349 [2024-11-17 00:47:17.256989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:25.349 [2024-11-17 
00:47:17.256997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:25.349 [2024-11-17 00:47:17.257003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:25.349 [2024-11-17 00:47:17.257009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:25.349 [2024-11-17 00:47:17.257014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:25.349 [2024-11-17 00:47:17.257020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:25.349 [2024-11-17 00:47:17.257025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:25.349 [2024-11-17 00:47:17.257032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:25.349 [2024-11-17 00:47:17.257037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:25.349 [2024-11-17 00:47:17.257043] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:25.349 [2024-11-17 00:47:17.257049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:25.349 [2024-11-17 00:47:17.257056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:25.349 [2024-11-17 00:47:17.257062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:25.350 [2024-11-17 00:47:17.257070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:25.350 [2024-11-17 00:47:17.257076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:25.350 [2024-11-17 00:47:17.257083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.350 [2024-11-17 00:47:17.257088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:25.350 [2024-11-17 00:47:17.257096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:15:25.350 [2024-11-17 00:47:17.257102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.350 [2024-11-17 00:47:17.257160] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
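
The layout figures in the dump above are internally consistent and easy to sanity-check by hand (values copied from this run's trace):

    echo $(( 20971520 * 4 / 1024 / 1024 ))     # L2P table: 20971520 entries * 4 B address
                                               #   = 80 MiB, matching "Region l2p ... 80.00 MiB"
    echo $(( 20971520 * 4096 / 1024 / 1024 ))  # user capacity: 20971520 4-KiB blocks
                                               #   = 81920 MiB, the num_blocks ftl0 reports below

The --l2p_dram_limit 60 passed to bdev_ftl_create is likewise what caps the resident L2P at the "59 (of 60) MiB" noted a few entries further down.
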
00:15:25.350 [2024-11-17 00:47:17.257167] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:27.884 [2024-11-17 00:47:19.527019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.527202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:27.884 [2024-11-17 00:47:19.527225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2269.847 ms 00:15:27.884 [2024-11-17 00:47:19.527234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.546644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.546689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:27.884 [2024-11-17 00:47:19.546715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.267 ms 00:15:27.884 [2024-11-17 00:47:19.546732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.546858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.546868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:27.884 [2024-11-17 00:47:19.546878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:27.884 [2024-11-17 00:47:19.546885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.557603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.557650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:27.884 [2024-11-17 00:47:19.557669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.658 ms 00:15:27.884 [2024-11-17 00:47:19.557680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.557728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.557740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:27.884 [2024-11-17 00:47:19.557755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:27.884 [2024-11-17 00:47:19.557767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.558198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.558237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:27.884 [2024-11-17 00:47:19.558255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:15:27.884 [2024-11-17 00:47:19.558268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.558485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.558506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:27.884 [2024-11-17 00:47:19.558523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:15:27.884 [2024-11-17 00:47:19.558536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.564554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.564689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:27.884 [2024-11-17 
00:47:19.564707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.969 ms 00:15:27.884 [2024-11-17 00:47:19.564716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.572999] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:27.884 [2024-11-17 00:47:19.587546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.587580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:27.884 [2024-11-17 00:47:19.587590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.755 ms 00:15:27.884 [2024-11-17 00:47:19.587609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.624982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.625023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:27.884 [2024-11-17 00:47:19.625037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.333 ms 00:15:27.884 [2024-11-17 00:47:19.625049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.625227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.625238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:27.884 [2024-11-17 00:47:19.625249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:15:27.884 [2024-11-17 00:47:19.625258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.628051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.884 [2024-11-17 00:47:19.628086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:27.884 [2024-11-17 00:47:19.628096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:15:27.884 [2024-11-17 00:47:19.628118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.884 [2024-11-17 00:47:19.630406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.885 [2024-11-17 00:47:19.630538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:27.885 [2024-11-17 00:47:19.630554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.242 ms 00:15:27.885 [2024-11-17 00:47:19.630563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.885 [2024-11-17 00:47:19.630871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.885 [2024-11-17 00:47:19.630888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:27.885 [2024-11-17 00:47:19.630897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:15:27.885 [2024-11-17 00:47:19.630907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.885 [2024-11-17 00:47:19.652788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.885 [2024-11-17 00:47:19.652823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:27.885 [2024-11-17 00:47:19.652835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.852 ms 00:15:27.885 [2024-11-17 00:47:19.652844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.885 [2024-11-17 00:47:19.656436] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.885 [2024-11-17 00:47:19.656471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:27.885 [2024-11-17 00:47:19.656481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 00:15:27.885 [2024-11-17 00:47:19.656492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.885 [2024-11-17 00:47:19.659124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.885 [2024-11-17 00:47:19.659158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:27.885 [2024-11-17 00:47:19.659166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:15:27.885 [2024-11-17 00:47:19.659175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.885 [2024-11-17 00:47:19.662328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.885 [2024-11-17 00:47:19.662479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:27.885 [2024-11-17 00:47:19.662495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.106 ms 00:15:27.885 [2024-11-17 00:47:19.662507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.885 [2024-11-17 00:47:19.662569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.885 [2024-11-17 00:47:19.662594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:27.885 [2024-11-17 00:47:19.662603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:27.885 [2024-11-17 00:47:19.662613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.885 [2024-11-17 00:47:19.662687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.885 [2024-11-17 00:47:19.662698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:27.885 [2024-11-17 00:47:19.662707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:27.885 [2024-11-17 00:47:19.662719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.885 [2024-11-17 00:47:19.663630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2415.849 ms, result 0 00:15:27.885 { 00:15:27.885 "name": "ftl0", 00:15:27.885 "uuid": "6189555a-87f0-4f33-b0b0-2b9878ea5a18" 00:15:27.885 } 00:15:27.885 00:47:19 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:27.885 00:47:19 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:27.885 00:47:19 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.885 00:47:19 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:27.885 00:47:19 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.885 00:47:19 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.885 00:47:19 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:27.885 00:47:19 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:28.143 [ 00:15:28.143 { 00:15:28.143 "name": "ftl0", 00:15:28.143 "aliases": [ 00:15:28.143 "6189555a-87f0-4f33-b0b0-2b9878ea5a18" 00:15:28.143 ], 00:15:28.143 "product_name": "FTL disk", 00:15:28.143 
"block_size": 4096, 00:15:28.143 "num_blocks": 20971520, 00:15:28.143 "uuid": "6189555a-87f0-4f33-b0b0-2b9878ea5a18", 00:15:28.143 "assigned_rate_limits": { 00:15:28.143 "rw_ios_per_sec": 0, 00:15:28.143 "rw_mbytes_per_sec": 0, 00:15:28.143 "r_mbytes_per_sec": 0, 00:15:28.143 "w_mbytes_per_sec": 0 00:15:28.143 }, 00:15:28.143 "claimed": false, 00:15:28.143 "zoned": false, 00:15:28.143 "supported_io_types": { 00:15:28.143 "read": true, 00:15:28.143 "write": true, 00:15:28.143 "unmap": true, 00:15:28.143 "flush": true, 00:15:28.143 "reset": false, 00:15:28.143 "nvme_admin": false, 00:15:28.143 "nvme_io": false, 00:15:28.143 "nvme_io_md": false, 00:15:28.143 "write_zeroes": true, 00:15:28.143 "zcopy": false, 00:15:28.143 "get_zone_info": false, 00:15:28.143 "zone_management": false, 00:15:28.143 "zone_append": false, 00:15:28.143 "compare": false, 00:15:28.143 "compare_and_write": false, 00:15:28.143 "abort": false, 00:15:28.143 "seek_hole": false, 00:15:28.143 "seek_data": false, 00:15:28.143 "copy": false, 00:15:28.143 "nvme_iov_md": false 00:15:28.143 }, 00:15:28.143 "driver_specific": { 00:15:28.143 "ftl": { 00:15:28.143 "base_bdev": "bd58257d-aa66-407f-a7a9-b1089ff8b9ff", 00:15:28.143 "cache": "nvc0n1p0" 00:15:28.143 } 00:15:28.143 } 00:15:28.143 } 00:15:28.143 ] 00:15:28.143 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:28.143 00:47:20 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:28.143 00:47:20 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:28.401 00:47:20 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:28.401 00:47:20 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:28.401 [2024-11-17 00:47:20.456098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.401 [2024-11-17 00:47:20.456140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:28.401 [2024-11-17 00:47:20.456155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:28.401 [2024-11-17 00:47:20.456163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.401 [2024-11-17 00:47:20.456201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:28.401 [2024-11-17 00:47:20.456703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.401 [2024-11-17 00:47:20.456731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:28.401 [2024-11-17 00:47:20.456740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:15:28.401 [2024-11-17 00:47:20.456751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.401 [2024-11-17 00:47:20.457239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.401 [2024-11-17 00:47:20.457258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:28.401 [2024-11-17 00:47:20.457267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:15:28.401 [2024-11-17 00:47:20.457277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.401 [2024-11-17 00:47:20.460537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.401 [2024-11-17 00:47:20.460589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:28.401 [2024-11-17 
00:47:20.460597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:15:28.401 [2024-11-17 00:47:20.460606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.660 [2024-11-17 00:47:20.466276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.660 [2024-11-17 00:47:20.466403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:28.660 [2024-11-17 00:47:20.466416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.640 ms 00:15:28.660 [2024-11-17 00:47:20.466424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.660 [2024-11-17 00:47:20.467729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.660 [2024-11-17 00:47:20.467759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:28.660 [2024-11-17 00:47:20.467766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:15:28.660 [2024-11-17 00:47:20.467773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.660 [2024-11-17 00:47:20.471102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.660 [2024-11-17 00:47:20.471135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:28.660 [2024-11-17 00:47:20.471143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.295 ms 00:15:28.660 [2024-11-17 00:47:20.471151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.660 [2024-11-17 00:47:20.471307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.660 [2024-11-17 00:47:20.471317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:28.660 [2024-11-17 00:47:20.471323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:15:28.660 [2024-11-17 00:47:20.471330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.660 [2024-11-17 00:47:20.472711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.660 [2024-11-17 00:47:20.472740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:28.660 [2024-11-17 00:47:20.472747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:15:28.660 [2024-11-17 00:47:20.472754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.660 [2024-11-17 00:47:20.473545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.660 [2024-11-17 00:47:20.473644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:28.660 [2024-11-17 00:47:20.473655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:15:28.660 [2024-11-17 00:47:20.473662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.660 [2024-11-17 00:47:20.474290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.660 [2024-11-17 00:47:20.474315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:28.660 [2024-11-17 00:47:20.474322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:15:28.660 [2024-11-17 00:47:20.474328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.660 [2024-11-17 00:47:20.475063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.660 [2024-11-17 00:47:20.475092] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:28.660 [2024-11-17 00:47:20.475099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:15:28.660 [2024-11-17 00:47:20.475105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.660 [2024-11-17 00:47:20.475136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:28.660 [2024-11-17 00:47:20.475149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:28.660 [2024-11-17 00:47:20.475157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:28.660 [2024-11-17 00:47:20.475165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:28.660 [2024-11-17 00:47:20.475170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:28.660 [2024-11-17 00:47:20.475181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:28.660 [2024-11-17 00:47:20.475186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:28.660 [2024-11-17 00:47:20.475194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:28.660 [2024-11-17 00:47:20.475200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:28.660 [2024-11-17 00:47:20.475207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 
00:47:20.475298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:28.661 [2024-11-17 00:47:20.475475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:28.661 [2024-11-17 00:47:20.475768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:28.662 [2024-11-17 00:47:20.475843] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:28.662 [2024-11-17 00:47:20.475849] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6189555a-87f0-4f33-b0b0-2b9878ea5a18 00:15:28.662 [2024-11-17 00:47:20.475856] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:28.662 [2024-11-17 00:47:20.475862] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:28.662 [2024-11-17 00:47:20.475880] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:28.662 [2024-11-17 00:47:20.475886] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:28.662 [2024-11-17 00:47:20.475893] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:28.662 [2024-11-17 00:47:20.475899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:28.662 [2024-11-17 00:47:20.475913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:28.662 [2024-11-17 00:47:20.475918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:28.662 [2024-11-17 00:47:20.475924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:28.662 [2024-11-17 00:47:20.475930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.662 [2024-11-17 00:47:20.475937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:28.662 [2024-11-17 00:47:20.475944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:15:28.662 [2024-11-17 00:47:20.475951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.477386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.662 [2024-11-17 00:47:20.477407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:28.662 [2024-11-17 00:47:20.477414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:15:28.662 [2024-11-17 00:47:20.477421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.477503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.662 [2024-11-17 00:47:20.477511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:28.662 [2024-11-17 00:47:20.477518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:28.662 [2024-11-17 00:47:20.477525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.482363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.482392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:28.662 [2024-11-17 00:47:20.482399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.482407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 
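
In the statistics block above, WAF is the write amplification factor, conventionally total media writes divided by host writes. With 960 internal metadata writes and 0 user writes the ratio divides by zero and is printed as "inf", the expected value for a device that was created and torn down without serving any user I/O:

    # WAF as dumped above (conventional definition, values from this run):
    # WAF = total writes / user writes = 960 / 0 -> inf
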
[2024-11-17 00:47:20.482458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.482466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:28.662 [2024-11-17 00:47:20.482472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.482480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.482546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.482560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:28.662 [2024-11-17 00:47:20.482566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.482589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.482610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.482617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:28.662 [2024-11-17 00:47:20.482623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.482630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.491265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.491298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:28.662 [2024-11-17 00:47:20.491306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.491313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.498419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.498453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:28.662 [2024-11-17 00:47:20.498461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.498469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.498540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.498550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:28.662 [2024-11-17 00:47:20.498558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.498564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.498627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.498636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:28.662 [2024-11-17 00:47:20.498642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.498649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.498713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.498721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:28.662 [2024-11-17 00:47:20.498727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.498736] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.498771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.498779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:28.662 [2024-11-17 00:47:20.498785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.498792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.498828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.498837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:28.662 [2024-11-17 00:47:20.498843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.498851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.498902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.662 [2024-11-17 00:47:20.498911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:28.662 [2024-11-17 00:47:20.498917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.662 [2024-11-17 00:47:20.498924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.662 [2024-11-17 00:47:20.499062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.957 ms, result 0 00:15:28.662 true 00:15:28.662 00:47:20 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 84392 00:15:28.662 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 84392 ']' 00:15:28.662 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 84392 00:15:28.662 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:28.663 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.663 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84392 00:15:28.663 killing process with pid 84392 00:15:28.663 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.663 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.663 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84392' 00:15:28.663 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 84392 00:15:28.663 00:47:20 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 84392 00:15:33.929 00:47:25 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:33.929 00:47:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:33.929 00:47:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:33.929 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:33.929 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:33.929 00:47:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:33.929 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:33.929 00:47:25 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:33.930 00:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:33.930 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:33.930 fio-3.35 00:15:33.930 Starting 1 thread 00:15:39.284 00:15:39.284 test: (groupid=0, jobs=1): err= 0: pid=84553: Sun Nov 17 00:47:30 2024 00:15:39.284 read: IOPS=873, BW=58.0MiB/s (60.8MB/s)(255MiB/4387msec) 00:15:39.284 slat (nsec): min=4133, max=29986, avg=7091.43, stdev=3430.37 00:15:39.284 clat (usec): min=273, max=1584, avg=522.94, stdev=255.86 00:15:39.284 lat (usec): min=278, max=1606, avg=530.03, stdev=258.41 00:15:39.284 clat percentiles (usec): 00:15:39.284 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 326], 20.00th=[ 330], 00:15:39.284 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 424], 00:15:39.284 | 70.00th=[ 619], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 963], 00:15:39.284 | 99.00th=[ 1090], 99.50th=[ 1188], 99.90th=[ 1303], 99.95th=[ 1401], 00:15:39.284 | 99.99th=[ 1582] 00:15:39.284 write: IOPS=879, BW=58.4MiB/s (61.2MB/s)(256MiB/4384msec); 0 zone resets 00:15:39.284 slat (usec): min=14, max=103, avg=21.35, stdev= 5.89 00:15:39.284 clat (usec): min=302, max=1843, avg=575.69, stdev=294.82 00:15:39.284 lat (usec): min=323, max=1868, avg=597.04, stdev=298.82 00:15:39.284 clat percentiles (usec): 00:15:39.284 | 1.00th=[ 343], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 355], 00:15:39.284 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 490], 00:15:39.284 | 70.00th=[ 725], 80.00th=[ 971], 90.00th=[ 996], 95.00th=[ 1029], 00:15:39.284 | 99.00th=[ 1319], 99.50th=[ 1696], 99.90th=[ 1795], 99.95th=[ 1811], 00:15:39.285 | 99.99th=[ 1844] 00:15:39.285 bw ( KiB/s): min=34816, max=86496, per=95.10%, avg=56882.00, stdev=24209.06, samples=8 00:15:39.285 iops : min= 512, max= 1272, avg=836.50, stdev=356.02, samples=8 00:15:39.285 lat (usec) : 500=64.59%, 750=6.85%, 1000=23.62% 
00:15:39.285 lat (msec) : 2=4.94% 00:15:39.285 cpu : usr=98.95%, sys=0.18%, ctx=13, majf=0, minf=1181 00:15:39.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.285 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.285 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.285 00:15:39.285 Run status group 0 (all jobs): 00:15:39.285 READ: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=255MiB (267MB), run=4387-4387msec 00:15:39.285 WRITE: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=256MiB (269MB), run=4384-4384msec 00:15:39.285 ----------------------------------------------------- 00:15:39.285 Suppressions used: 00:15:39.285 count bytes template 00:15:39.285 1 5 /usr/src/fio/parse.c 00:15:39.285 1 8 libtcmalloc_minimal.so 00:15:39.285 1 904 libcrypto.so 00:15:39.285 ----------------------------------------------------- 00:15:39.285 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:39.285 00:47:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:39.544 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:39.544 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:39.544 fio-3.35 00:15:39.544 Starting 2 threads 00:16:06.081 00:16:06.082 first_half: (groupid=0, jobs=1): err= 0: pid=84654: Sun Nov 17 00:47:54 2024 00:16:06.082 read: IOPS=2954, BW=11.5MiB/s (12.1MB/s)(255MiB/22083msec) 00:16:06.082 slat (nsec): min=2987, max=19351, avg=3873.24, stdev=812.29 00:16:06.082 clat (usec): min=636, max=268986, avg=34076.53, stdev=16233.87 00:16:06.082 lat (usec): min=640, max=268990, avg=34080.41, stdev=16233.99 00:16:06.082 clat percentiles (msec): 00:16:06.082 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 31], 00:16:06.082 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:06.082 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 46], 00:16:06.082 | 99.00th=[ 126], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 215], 00:16:06.082 | 99.99th=[ 262] 00:16:06.082 write: IOPS=3854, BW=15.1MiB/s (15.8MB/s)(256MiB/17002msec); 0 zone resets 00:16:06.082 slat (usec): min=3, max=672, avg= 5.51, stdev= 4.37 00:16:06.082 clat (usec): min=371, max=76732, avg=9156.09, stdev=15343.83 00:16:06.082 lat (usec): min=377, max=76738, avg=9161.60, stdev=15343.83 00:16:06.082 clat percentiles (usec): 00:16:06.082 | 1.00th=[ 660], 5.00th=[ 750], 10.00th=[ 848], 20.00th=[ 1123], 00:16:06.082 | 30.00th=[ 2409], 40.00th=[ 3687], 50.00th=[ 4752], 60.00th=[ 5407], 00:16:06.082 | 70.00th=[ 6325], 80.00th=[10028], 90.00th=[13698], 95.00th=[59507], 00:16:06.082 | 99.00th=[66323], 99.50th=[69731], 99.90th=[73925], 99.95th=[74974], 00:16:06.082 | 99.99th=[76022] 00:16:06.082 bw ( KiB/s): min= 800, max=40816, per=89.66%, avg=24966.10, stdev=13432.64, samples=21 00:16:06.082 iops : min= 200, max=10204, avg=6241.52, stdev=3358.16, samples=21 00:16:06.082 lat (usec) : 500=0.03%, 750=2.50%, 1000=5.53% 00:16:06.082 lat (msec) : 2=6.25%, 4=7.29%, 10=18.96%, 20=6.30%, 50=47.33% 00:16:06.082 lat (msec) : 100=4.87%, 250=0.93%, 500=0.01% 00:16:06.082 cpu : usr=99.37%, sys=0.13%, ctx=47, majf=0, minf=5599 00:16:06.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:06.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.082 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.082 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.082 second_half: (groupid=0, jobs=1): err= 0: pid=84655: Sun Nov 17 00:47:54 2024 00:16:06.082 read: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(255MiB/22222msec) 00:16:06.082 slat (nsec): min=3014, max=59132, avg=5149.29, stdev=1054.59 00:16:06.082 clat (usec): min=558, max=273039, avg=33621.95, stdev=17573.40 00:16:06.082 lat (usec): min=563, max=273044, avg=33627.10, stdev=17573.47 00:16:06.082 clat percentiles (msec): 00:16:06.082 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 31], 00:16:06.082 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:16:06.082 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 37], 
95.00th=[ 44], 00:16:06.082 | 99.00th=[ 128], 99.50th=[ 155], 99.90th=[ 197], 99.95th=[ 224], 00:16:06.082 | 99.99th=[ 268] 00:16:06.082 write: IOPS=3480, BW=13.6MiB/s (14.3MB/s)(256MiB/18828msec); 0 zone resets 00:16:06.082 slat (nsec): min=3857, max=73941, avg=6778.76, stdev=2806.90 00:16:06.082 clat (usec): min=360, max=77477, avg=9906.96, stdev=16110.78 00:16:06.082 lat (usec): min=368, max=77483, avg=9913.74, stdev=16110.86 00:16:06.082 clat percentiles (usec): 00:16:06.082 | 1.00th=[ 652], 5.00th=[ 742], 10.00th=[ 824], 20.00th=[ 1074], 00:16:06.082 | 30.00th=[ 1876], 40.00th=[ 3359], 50.00th=[ 4293], 60.00th=[ 5211], 00:16:06.082 | 70.00th=[ 6390], 80.00th=[10945], 90.00th=[29754], 95.00th=[60031], 00:16:06.082 | 99.00th=[66847], 99.50th=[69731], 99.90th=[74974], 99.95th=[76022], 00:16:06.082 | 99.99th=[77071] 00:16:06.082 bw ( KiB/s): min= 528, max=45128, per=81.87%, avg=22798.65, stdev=14454.04, samples=23 00:16:06.082 iops : min= 132, max=11282, avg=5699.65, stdev=3613.50, samples=23 00:16:06.082 lat (usec) : 500=0.01%, 750=2.84%, 1000=6.02% 00:16:06.082 lat (msec) : 2=6.54%, 4=8.28%, 10=16.58%, 20=5.83%, 50=48.01% 00:16:06.082 lat (msec) : 100=4.91%, 250=0.97%, 500=0.01% 00:16:06.082 cpu : usr=99.20%, sys=0.14%, ctx=43, majf=0, minf=5541 00:16:06.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:06.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.082 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.082 issued rwts: total=65249,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.082 00:16:06.082 Run status group 0 (all jobs): 00:16:06.082 READ: bw=22.9MiB/s (24.1MB/s), 11.5MiB/s-11.5MiB/s (12.0MB/s-12.1MB/s), io=510MiB (534MB), run=22083-22222msec 00:16:06.082 WRITE: bw=27.2MiB/s (28.5MB/s), 13.6MiB/s-15.1MiB/s (14.3MB/s-15.8MB/s), io=512MiB (537MB), run=17002-18828msec 00:16:06.082 ----------------------------------------------------- 00:16:06.082 Suppressions used: 00:16:06.082 count bytes template 00:16:06.082 2 10 /usr/src/fio/parse.c 00:16:06.082 2 192 /usr/src/fio/iolog.c 00:16:06.082 1 8 libtcmalloc_minimal.so 00:16:06.082 1 904 libcrypto.so 00:16:06.082 ----------------------------------------------------- 00:16:06.082 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:06.082 00:47:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:06.082 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:06.082 fio-3.35 00:16:06.082 Starting 1 thread 00:16:20.967 00:16:20.967 test: (groupid=0, jobs=1): err= 0: pid=84946: Sun Nov 17 00:48:11 2024 00:16:20.967 read: IOPS=7625, BW=29.8MiB/s (31.2MB/s)(255MiB/8550msec) 00:16:20.967 slat (nsec): min=2909, max=21292, avg=3335.31, stdev=610.61 00:16:20.967 clat (usec): min=1351, max=33767, avg=16777.94, stdev=1810.20 00:16:20.967 lat (usec): min=1357, max=33770, avg=16781.28, stdev=1810.21 00:16:20.967 clat percentiles (usec): 00:16:20.967 | 1.00th=[14353], 5.00th=[15008], 10.00th=[15270], 20.00th=[15664], 00:16:20.967 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16450], 60.00th=[16712], 00:16:20.967 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[20841], 00:16:20.967 | 99.00th=[23725], 99.50th=[25297], 99.90th=[27919], 99.95th=[29492], 00:16:20.967 | 99.99th=[32637] 00:16:20.967 write: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(256MiB/6115msec); 0 zone resets 00:16:20.967 slat (usec): min=4, max=359, avg= 5.64, stdev= 3.85 00:16:20.967 clat (usec): min=519, max=66375, avg=11890.91, stdev=12778.58 00:16:20.967 lat (usec): min=523, max=66380, avg=11896.55, stdev=12778.55 00:16:20.967 clat percentiles (usec): 00:16:20.967 | 1.00th=[ 758], 5.00th=[ 1057], 10.00th=[ 1205], 20.00th=[ 1434], 00:16:20.967 | 30.00th=[ 1762], 40.00th=[ 2573], 50.00th=[ 9372], 60.00th=[11731], 00:16:20.967 | 70.00th=[13698], 80.00th=[16188], 90.00th=[36439], 95.00th=[41157], 00:16:20.967 | 99.00th=[45876], 99.50th=[46924], 99.90th=[51119], 99.95th=[55313], 00:16:20.967 | 99.99th=[61604] 00:16:20.967 bw ( KiB/s): min= 7728, max=49960, per=94.06%, avg=40323.69, stdev=10914.68, samples=13 00:16:20.967 iops : min= 1932, max=12490, avg=10080.92, stdev=2728.65, samples=13 00:16:20.967 lat (usec) : 750=0.46%, 1000=1.51% 00:16:20.967 lat (msec) : 2=15.29%, 4=3.68%, 10=5.56%, 20=62.17%, 50=11.23% 00:16:20.967 lat (msec) : 100=0.09% 00:16:20.967 cpu : 
usr=99.23%, sys=0.11%, ctx=21, majf=0, minf=5577 00:16:20.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:20.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.967 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.967 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.967 00:16:20.967 Run status group 0 (all jobs): 00:16:20.967 READ: bw=29.8MiB/s (31.2MB/s), 29.8MiB/s-29.8MiB/s (31.2MB/s-31.2MB/s), io=255MiB (267MB), run=8550-8550msec 00:16:20.967 WRITE: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=256MiB (268MB), run=6115-6115msec 00:16:20.967 ----------------------------------------------------- 00:16:20.967 Suppressions used: 00:16:20.967 count bytes template 00:16:20.967 1 5 /usr/src/fio/parse.c 00:16:20.967 2 192 /usr/src/fio/iolog.c 00:16:20.967 1 8 libtcmalloc_minimal.so 00:16:20.967 1 904 libcrypto.so 00:16:20.967 ----------------------------------------------------- 00:16:20.967 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:20.967 Remove shared memory files 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69843 /dev/shm/spdk_tgt_trace.pid83337 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:20.967 ************************************ 00:16:20.967 END TEST ftl_fio_basic 00:16:20.967 ************************************ 00:16:20.967 00:16:20.967 real 0m58.553s 00:16:20.967 user 2m1.132s 00:16:20.967 sys 0m10.977s 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.967 00:48:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:20.967 00:48:12 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:20.967 00:48:12 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:20.967 00:48:12 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.967 00:48:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:20.967 ************************************ 00:16:20.967 START TEST ftl_bdevperf 00:16:20.967 ************************************ 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:20.967 * Looking for test storage... 
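Each fio invocation in the test above goes through the same fio_bdev preamble traced repeatedly in the xtrace: ldd the spdk_bdev fio plugin, pick the libasan path out with grep and awk, and put it ahead of the plugin in LD_PRELOAD so the sanitizer runtime is loaded first. A condensed sketch of that pattern, using one job file path from above as an example (the real helper lives in common/autotest_common.sh and also handles libclang_rt.asan; this is an illustration, not a replacement):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
# ASan must be preloaded before the plugin so its allocator interposition wins.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job"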
00:16:20.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:20.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.967 --rc genhtml_branch_coverage=1 00:16:20.967 --rc genhtml_function_coverage=1 00:16:20.967 --rc genhtml_legend=1 00:16:20.967 --rc geninfo_all_blocks=1 00:16:20.967 --rc geninfo_unexecuted_blocks=1 00:16:20.967 00:16:20.967 ' 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:20.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.967 --rc genhtml_branch_coverage=1 00:16:20.967 
--rc genhtml_function_coverage=1 00:16:20.967 --rc genhtml_legend=1 00:16:20.967 --rc geninfo_all_blocks=1 00:16:20.967 --rc geninfo_unexecuted_blocks=1 00:16:20.967 00:16:20.967 ' 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:20.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.967 --rc genhtml_branch_coverage=1 00:16:20.967 --rc genhtml_function_coverage=1 00:16:20.967 --rc genhtml_legend=1 00:16:20.967 --rc geninfo_all_blocks=1 00:16:20.967 --rc geninfo_unexecuted_blocks=1 00:16:20.967 00:16:20.967 ' 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:20.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.967 --rc genhtml_branch_coverage=1 00:16:20.967 --rc genhtml_function_coverage=1 00:16:20.967 --rc genhtml_legend=1 00:16:20.967 --rc geninfo_all_blocks=1 00:16:20.967 --rc geninfo_unexecuted_blocks=1 00:16:20.967 00:16:20.967 ' 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:20.967 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=85191 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 85191 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 85191 ']' 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.968 00:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:20.968 [2024-11-17 00:48:12.532063] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
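bdevperf is launched here idle (-z, wait for an explicit RPC before starting I/O) with ftl0 named as the bdev under test (-T), and waitforlisten blocks until the app's RPC socket answers before any configuration RPCs are sent. A standalone sketch of that startup handshake, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten and killprocess helpers in common/autotest_common.sh do more bookkeeping than shown):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
# Poll a cheap RPC until the target answers on its UNIX domain socket.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done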
00:16:20.968 [2024-11-17 00:48:12.532370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85191 ] 00:16:20.968 [2024-11-17 00:48:12.678683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.968 [2024-11-17 00:48:12.714387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.541 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.541 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:21.541 00:48:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:21.541 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:21.541 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:21.541 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:21.541 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:21.541 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:21.802 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:21.802 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:21.802 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:21.802 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:21.802 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:21.802 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:21.802 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:21.802 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:22.063 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:22.063 { 00:16:22.063 "name": "nvme0n1", 00:16:22.063 "aliases": [ 00:16:22.063 "ba82b5db-5d6b-4e47-a9a9-1aeb4872e9cb" 00:16:22.063 ], 00:16:22.063 "product_name": "NVMe disk", 00:16:22.063 "block_size": 4096, 00:16:22.063 "num_blocks": 1310720, 00:16:22.063 "uuid": "ba82b5db-5d6b-4e47-a9a9-1aeb4872e9cb", 00:16:22.063 "numa_id": -1, 00:16:22.063 "assigned_rate_limits": { 00:16:22.063 "rw_ios_per_sec": 0, 00:16:22.063 "rw_mbytes_per_sec": 0, 00:16:22.063 "r_mbytes_per_sec": 0, 00:16:22.063 "w_mbytes_per_sec": 0 00:16:22.063 }, 00:16:22.063 "claimed": true, 00:16:22.063 "claim_type": "read_many_write_one", 00:16:22.063 "zoned": false, 00:16:22.063 "supported_io_types": { 00:16:22.063 "read": true, 00:16:22.063 "write": true, 00:16:22.063 "unmap": true, 00:16:22.063 "flush": true, 00:16:22.063 "reset": true, 00:16:22.064 "nvme_admin": true, 00:16:22.064 "nvme_io": true, 00:16:22.064 "nvme_io_md": false, 00:16:22.064 "write_zeroes": true, 00:16:22.064 "zcopy": false, 00:16:22.064 "get_zone_info": false, 00:16:22.064 "zone_management": false, 00:16:22.064 "zone_append": false, 00:16:22.064 "compare": true, 00:16:22.064 "compare_and_write": false, 00:16:22.064 "abort": true, 00:16:22.064 "seek_hole": false, 00:16:22.064 "seek_data": false, 00:16:22.064 "copy": true, 00:16:22.064 "nvme_iov_md": false 00:16:22.064 }, 00:16:22.064 "driver_specific": { 00:16:22.064 
"nvme": [ 00:16:22.064 { 00:16:22.064 "pci_address": "0000:00:11.0", 00:16:22.064 "trid": { 00:16:22.064 "trtype": "PCIe", 00:16:22.064 "traddr": "0000:00:11.0" 00:16:22.064 }, 00:16:22.064 "ctrlr_data": { 00:16:22.064 "cntlid": 0, 00:16:22.064 "vendor_id": "0x1b36", 00:16:22.064 "model_number": "QEMU NVMe Ctrl", 00:16:22.064 "serial_number": "12341", 00:16:22.064 "firmware_revision": "8.0.0", 00:16:22.064 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:22.064 "oacs": { 00:16:22.064 "security": 0, 00:16:22.064 "format": 1, 00:16:22.064 "firmware": 0, 00:16:22.064 "ns_manage": 1 00:16:22.064 }, 00:16:22.064 "multi_ctrlr": false, 00:16:22.064 "ana_reporting": false 00:16:22.064 }, 00:16:22.064 "vs": { 00:16:22.064 "nvme_version": "1.4" 00:16:22.064 }, 00:16:22.064 "ns_data": { 00:16:22.064 "id": 1, 00:16:22.064 "can_share": false 00:16:22.064 } 00:16:22.064 } 00:16:22.064 ], 00:16:22.064 "mp_policy": "active_passive" 00:16:22.064 } 00:16:22.064 } 00:16:22.064 ]' 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:22.064 00:48:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:22.325 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=56bd1c26-5d53-4efc-8071-122538257bbc 00:16:22.325 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:22.325 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 56bd1c26-5d53-4efc-8071-122538257bbc 00:16:22.325 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:22.586 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c9ae35bf-1c0e-48ea-b517-4e3633220b37 00:16:22.586 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c9ae35bf-1c0e-48ea-b517-4e3633220b37 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:22.845 00:48:14 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:22.845 00:48:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:23.104 { 00:16:23.104 "name": "ec7ad736-c4ae-4594-a1aa-2c02674b033c", 00:16:23.104 "aliases": [ 00:16:23.104 "lvs/nvme0n1p0" 00:16:23.104 ], 00:16:23.104 "product_name": "Logical Volume", 00:16:23.104 "block_size": 4096, 00:16:23.104 "num_blocks": 26476544, 00:16:23.104 "uuid": "ec7ad736-c4ae-4594-a1aa-2c02674b033c", 00:16:23.104 "assigned_rate_limits": { 00:16:23.104 "rw_ios_per_sec": 0, 00:16:23.104 "rw_mbytes_per_sec": 0, 00:16:23.104 "r_mbytes_per_sec": 0, 00:16:23.104 "w_mbytes_per_sec": 0 00:16:23.104 }, 00:16:23.104 "claimed": false, 00:16:23.104 "zoned": false, 00:16:23.104 "supported_io_types": { 00:16:23.104 "read": true, 00:16:23.104 "write": true, 00:16:23.104 "unmap": true, 00:16:23.104 "flush": false, 00:16:23.104 "reset": true, 00:16:23.104 "nvme_admin": false, 00:16:23.104 "nvme_io": false, 00:16:23.104 "nvme_io_md": false, 00:16:23.104 "write_zeroes": true, 00:16:23.104 "zcopy": false, 00:16:23.104 "get_zone_info": false, 00:16:23.104 "zone_management": false, 00:16:23.104 "zone_append": false, 00:16:23.104 "compare": false, 00:16:23.104 "compare_and_write": false, 00:16:23.104 "abort": false, 00:16:23.104 "seek_hole": true, 00:16:23.104 "seek_data": true, 00:16:23.104 "copy": false, 00:16:23.104 "nvme_iov_md": false 00:16:23.104 }, 00:16:23.104 "driver_specific": { 00:16:23.104 "lvol": { 00:16:23.104 "lvol_store_uuid": "c9ae35bf-1c0e-48ea-b517-4e3633220b37", 00:16:23.104 "base_bdev": "nvme0n1", 00:16:23.104 "thin_provision": true, 00:16:23.104 "num_allocated_clusters": 0, 00:16:23.104 "snapshot": false, 00:16:23.104 "clone": false, 00:16:23.104 "esnap_clone": false 00:16:23.104 } 00:16:23.104 } 00:16:23.104 } 00:16:23.104 ]' 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:23.104 00:48:15 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:23.362 00:48:15 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:23.362 00:48:15 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:23.362 00:48:15 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:23.362 00:48:15 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:23.362 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:23.362 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:23.362 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:23.362 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:23.621 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:23.621 { 00:16:23.621 "name": "ec7ad736-c4ae-4594-a1aa-2c02674b033c", 00:16:23.621 "aliases": [ 00:16:23.621 "lvs/nvme0n1p0" 00:16:23.621 ], 00:16:23.621 "product_name": "Logical Volume", 00:16:23.621 "block_size": 4096, 00:16:23.621 "num_blocks": 26476544, 00:16:23.621 "uuid": "ec7ad736-c4ae-4594-a1aa-2c02674b033c", 00:16:23.621 "assigned_rate_limits": { 00:16:23.621 "rw_ios_per_sec": 0, 00:16:23.621 "rw_mbytes_per_sec": 0, 00:16:23.621 "r_mbytes_per_sec": 0, 00:16:23.621 "w_mbytes_per_sec": 0 00:16:23.621 }, 00:16:23.621 "claimed": false, 00:16:23.621 "zoned": false, 00:16:23.621 "supported_io_types": { 00:16:23.621 "read": true, 00:16:23.621 "write": true, 00:16:23.621 "unmap": true, 00:16:23.621 "flush": false, 00:16:23.621 "reset": true, 00:16:23.621 "nvme_admin": false, 00:16:23.621 "nvme_io": false, 00:16:23.621 "nvme_io_md": false, 00:16:23.621 "write_zeroes": true, 00:16:23.621 "zcopy": false, 00:16:23.621 "get_zone_info": false, 00:16:23.621 "zone_management": false, 00:16:23.621 "zone_append": false, 00:16:23.621 "compare": false, 00:16:23.621 "compare_and_write": false, 00:16:23.621 "abort": false, 00:16:23.621 "seek_hole": true, 00:16:23.621 "seek_data": true, 00:16:23.621 "copy": false, 00:16:23.621 "nvme_iov_md": false 00:16:23.621 }, 00:16:23.621 "driver_specific": { 00:16:23.621 "lvol": { 00:16:23.621 "lvol_store_uuid": "c9ae35bf-1c0e-48ea-b517-4e3633220b37", 00:16:23.621 "base_bdev": "nvme0n1", 00:16:23.621 "thin_provision": true, 00:16:23.621 "num_allocated_clusters": 0, 00:16:23.621 "snapshot": false, 00:16:23.621 "clone": false, 00:16:23.621 "esnap_clone": false 00:16:23.621 } 00:16:23.621 } 00:16:23.621 } 00:16:23.621 ]' 00:16:23.621 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:23.621 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:23.621 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:23.621 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:23.621 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:23.621 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:23.621 00:48:15 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:23.621 00:48:15 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:23.880 00:48:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:23.880 00:48:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:23.880 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:23.880 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:23.880 00:48:15 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:23.880 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:23.880 00:48:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec7ad736-c4ae-4594-a1aa-2c02674b033c 00:16:24.138 00:48:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:24.138 { 00:16:24.138 "name": "ec7ad736-c4ae-4594-a1aa-2c02674b033c", 00:16:24.138 "aliases": [ 00:16:24.138 "lvs/nvme0n1p0" 00:16:24.138 ], 00:16:24.138 "product_name": "Logical Volume", 00:16:24.138 "block_size": 4096, 00:16:24.138 "num_blocks": 26476544, 00:16:24.138 "uuid": "ec7ad736-c4ae-4594-a1aa-2c02674b033c", 00:16:24.138 "assigned_rate_limits": { 00:16:24.138 "rw_ios_per_sec": 0, 00:16:24.138 "rw_mbytes_per_sec": 0, 00:16:24.138 "r_mbytes_per_sec": 0, 00:16:24.138 "w_mbytes_per_sec": 0 00:16:24.138 }, 00:16:24.138 "claimed": false, 00:16:24.138 "zoned": false, 00:16:24.138 "supported_io_types": { 00:16:24.138 "read": true, 00:16:24.138 "write": true, 00:16:24.138 "unmap": true, 00:16:24.138 "flush": false, 00:16:24.138 "reset": true, 00:16:24.138 "nvme_admin": false, 00:16:24.138 "nvme_io": false, 00:16:24.138 "nvme_io_md": false, 00:16:24.138 "write_zeroes": true, 00:16:24.138 "zcopy": false, 00:16:24.138 "get_zone_info": false, 00:16:24.138 "zone_management": false, 00:16:24.138 "zone_append": false, 00:16:24.138 "compare": false, 00:16:24.138 "compare_and_write": false, 00:16:24.138 "abort": false, 00:16:24.138 "seek_hole": true, 00:16:24.138 "seek_data": true, 00:16:24.138 "copy": false, 00:16:24.138 "nvme_iov_md": false 00:16:24.138 }, 00:16:24.138 "driver_specific": { 00:16:24.138 "lvol": { 00:16:24.138 "lvol_store_uuid": "c9ae35bf-1c0e-48ea-b517-4e3633220b37", 00:16:24.138 "base_bdev": "nvme0n1", 00:16:24.138 "thin_provision": true, 00:16:24.138 "num_allocated_clusters": 0, 00:16:24.138 "snapshot": false, 00:16:24.138 "clone": false, 00:16:24.138 "esnap_clone": false 00:16:24.138 } 00:16:24.138 } 00:16:24.138 } 00:16:24.138 ]' 00:16:24.138 00:48:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:24.138 00:48:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:24.138 00:48:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:24.138 00:48:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:24.138 00:48:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:24.138 00:48:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:24.138 00:48:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:24.138 00:48:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ec7ad736-c4ae-4594-a1aa-2c02674b033c -c nvc0n1p0 --l2p_dram_limit 20 00:16:24.397 [2024-11-17 00:48:16.292808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.292845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:24.397 [2024-11-17 00:48:16.292857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:24.397 [2024-11-17 00:48:16.292866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.292906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.292913] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:24.397 [2024-11-17 00:48:16.292925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:24.397 [2024-11-17 00:48:16.292932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.292946] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:24.397 [2024-11-17 00:48:16.293121] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:24.397 [2024-11-17 00:48:16.293137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.293144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:24.397 [2024-11-17 00:48:16.293152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:16:24.397 [2024-11-17 00:48:16.293158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.293184] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 770ebcb6-0e0c-4532-a2e0-866133c50477 00:16:24.397 [2024-11-17 00:48:16.294155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.294188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:24.397 [2024-11-17 00:48:16.294197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:24.397 [2024-11-17 00:48:16.294205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.298926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.298956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:24.397 [2024-11-17 00:48:16.298963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.658 ms 00:16:24.397 [2024-11-17 00:48:16.298972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.299023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.299031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:24.397 [2024-11-17 00:48:16.299038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:24.397 [2024-11-17 00:48:16.299045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.299069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.299079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:24.397 [2024-11-17 00:48:16.299085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:24.397 [2024-11-17 00:48:16.299092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.299107] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:24.397 [2024-11-17 00:48:16.300395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.300421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:24.397 [2024-11-17 00:48:16.300430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:16:24.397 [2024-11-17 00:48:16.300435] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.300462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.300469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:24.397 [2024-11-17 00:48:16.300478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:24.397 [2024-11-17 00:48:16.300485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.300503] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:24.397 [2024-11-17 00:48:16.300635] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:24.397 [2024-11-17 00:48:16.300650] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:24.397 [2024-11-17 00:48:16.300658] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:24.397 [2024-11-17 00:48:16.300668] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:24.397 [2024-11-17 00:48:16.300675] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:24.397 [2024-11-17 00:48:16.300682] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:24.397 [2024-11-17 00:48:16.300688] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:24.397 [2024-11-17 00:48:16.300695] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:24.397 [2024-11-17 00:48:16.300701] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:24.397 [2024-11-17 00:48:16.300708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.300717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:24.397 [2024-11-17 00:48:16.300728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:16:24.397 [2024-11-17 00:48:16.300734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.300799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.397 [2024-11-17 00:48:16.300806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:24.397 [2024-11-17 00:48:16.300813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:24.397 [2024-11-17 00:48:16.300822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.397 [2024-11-17 00:48:16.300893] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:24.397 [2024-11-17 00:48:16.300911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:24.397 [2024-11-17 00:48:16.300918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.397 [2024-11-17 00:48:16.300926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.397 [2024-11-17 00:48:16.300933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:24.397 [2024-11-17 00:48:16.300938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:24.397 [2024-11-17 00:48:16.300944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:24.397 
[2024-11-17 00:48:16.300949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:24.397 [2024-11-17 00:48:16.300956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:24.397 [2024-11-17 00:48:16.300961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.397 [2024-11-17 00:48:16.300968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:24.397 [2024-11-17 00:48:16.300973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:24.397 [2024-11-17 00:48:16.300980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.397 [2024-11-17 00:48:16.300985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:24.397 [2024-11-17 00:48:16.300992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:24.397 [2024-11-17 00:48:16.300997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.397 [2024-11-17 00:48:16.301004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:24.397 [2024-11-17 00:48:16.301008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:24.397 [2024-11-17 00:48:16.301016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.397 [2024-11-17 00:48:16.301021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:24.397 [2024-11-17 00:48:16.301028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:24.397 [2024-11-17 00:48:16.301032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.397 [2024-11-17 00:48:16.301038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:24.397 [2024-11-17 00:48:16.301044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:24.397 [2024-11-17 00:48:16.301051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.397 [2024-11-17 00:48:16.301057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:24.397 [2024-11-17 00:48:16.301064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:24.397 [2024-11-17 00:48:16.301070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.397 [2024-11-17 00:48:16.301078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:24.397 [2024-11-17 00:48:16.301084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:24.397 [2024-11-17 00:48:16.301091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.397 [2024-11-17 00:48:16.301096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:24.397 [2024-11-17 00:48:16.301103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:24.397 [2024-11-17 00:48:16.301108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.397 [2024-11-17 00:48:16.301115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:24.398 [2024-11-17 00:48:16.301121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:24.398 [2024-11-17 00:48:16.301128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.398 [2024-11-17 00:48:16.301134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:24.398 [2024-11-17 00:48:16.301141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:24.398 [2024-11-17 00:48:16.301146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.398 [2024-11-17 00:48:16.301153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:24.398 [2024-11-17 00:48:16.301159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:24.398 [2024-11-17 00:48:16.301166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.398 [2024-11-17 00:48:16.301172] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:24.398 [2024-11-17 00:48:16.301181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:24.398 [2024-11-17 00:48:16.301188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.398 [2024-11-17 00:48:16.301195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.398 [2024-11-17 00:48:16.301201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:24.398 [2024-11-17 00:48:16.301209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:24.398 [2024-11-17 00:48:16.301214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:24.398 [2024-11-17 00:48:16.301222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:24.398 [2024-11-17 00:48:16.301228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:24.398 [2024-11-17 00:48:16.301236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:24.398 [2024-11-17 00:48:16.301244] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:24.398 [2024-11-17 00:48:16.301256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.398 [2024-11-17 00:48:16.301264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:24.398 [2024-11-17 00:48:16.301272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:24.398 [2024-11-17 00:48:16.301278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:24.398 [2024-11-17 00:48:16.301286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:24.398 [2024-11-17 00:48:16.301293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:24.398 [2024-11-17 00:48:16.301302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:24.398 [2024-11-17 00:48:16.301308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:24.398 [2024-11-17 00:48:16.301315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:24.398 [2024-11-17 00:48:16.301322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:24.398 [2024-11-17 00:48:16.301329] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:24.398 [2024-11-17 00:48:16.301335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:24.398 [2024-11-17 00:48:16.301343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:24.398 [2024-11-17 00:48:16.301350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:24.398 [2024-11-17 00:48:16.301377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:24.398 [2024-11-17 00:48:16.301384] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:24.398 [2024-11-17 00:48:16.301392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.398 [2024-11-17 00:48:16.301399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:24.398 [2024-11-17 00:48:16.301407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:24.398 [2024-11-17 00:48:16.301414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:24.398 [2024-11-17 00:48:16.301421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:24.398 [2024-11-17 00:48:16.301428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.398 [2024-11-17 00:48:16.301441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:24.398 [2024-11-17 00:48:16.301449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:16:24.398 [2024-11-17 00:48:16.301456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.398 [2024-11-17 00:48:16.301478] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
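A quick cross-check of the layout dump above (an illustrative shell sketch, not part of the test scripts): the 80.00 MiB l2p region follows directly from the reported entry count and address size, and the 103424.00 MiB base capacity from the lvol geometry queried earlier.
  # 20971520 L2P entries * 4 bytes per entry, in MiB
  echo $(( 20971520 * 4 / 1048576 ))       # 80
  # 26476544 blocks * 4096 bytes per block, in MiB
  echo $(( 26476544 * 4096 / 1048576 ))    # 103424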
00:16:24.398 [2024-11-17 00:48:16.301487] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:28.599 [2024-11-17 00:48:20.044833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.045149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:28.599 [2024-11-17 00:48:20.045177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3743.338 ms 00:16:28.599 [2024-11-17 00:48:20.045190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.070325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.070431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:28.599 [2024-11-17 00:48:20.070454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.014 ms 00:16:28.599 [2024-11-17 00:48:20.070475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.070686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.070711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:28.599 [2024-11-17 00:48:20.070727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:28.599 [2024-11-17 00:48:20.070742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.083792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.083847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:28.599 [2024-11-17 00:48:20.083860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.998 ms 00:16:28.599 [2024-11-17 00:48:20.083872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.083903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.083916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:28.599 [2024-11-17 00:48:20.083926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:28.599 [2024-11-17 00:48:20.083937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.084582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.084717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:28.599 [2024-11-17 00:48:20.084732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:16:28.599 [2024-11-17 00:48:20.084746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.084871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.084884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:28.599 [2024-11-17 00:48:20.084892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:28.599 [2024-11-17 00:48:20.084908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.093039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.093102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:28.599 [2024-11-17 
00:48:20.093113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.113 ms 00:16:28.599 [2024-11-17 00:48:20.093124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.104616] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:28.599 [2024-11-17 00:48:20.112261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.112532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:28.599 [2024-11-17 00:48:20.112591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.057 ms 00:16:28.599 [2024-11-17 00:48:20.112601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.206317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.206401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:28.599 [2024-11-17 00:48:20.206421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.671 ms 00:16:28.599 [2024-11-17 00:48:20.206438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.599 [2024-11-17 00:48:20.206649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.599 [2024-11-17 00:48:20.206667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:28.599 [2024-11-17 00:48:20.206679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:16:28.600 [2024-11-17 00:48:20.206687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.212902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.600 [2024-11-17 00:48:20.212954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:28.600 [2024-11-17 00:48:20.212969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.171 ms 00:16:28.600 [2024-11-17 00:48:20.212979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.218267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.600 [2024-11-17 00:48:20.218318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:28.600 [2024-11-17 00:48:20.218333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.233 ms 00:16:28.600 [2024-11-17 00:48:20.218341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.218698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.600 [2024-11-17 00:48:20.218712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:28.600 [2024-11-17 00:48:20.218731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:16:28.600 [2024-11-17 00:48:20.218740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.267110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.600 [2024-11-17 00:48:20.267164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:28.600 [2024-11-17 00:48:20.267179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.342 ms 00:16:28.600 [2024-11-17 00:48:20.267194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.274574] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.600 [2024-11-17 00:48:20.274758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:28.600 [2024-11-17 00:48:20.274786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.300 ms 00:16:28.600 [2024-11-17 00:48:20.274796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.281081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.600 [2024-11-17 00:48:20.281132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:28.600 [2024-11-17 00:48:20.281145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.235 ms 00:16:28.600 [2024-11-17 00:48:20.281154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.287517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.600 [2024-11-17 00:48:20.287568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:28.600 [2024-11-17 00:48:20.287584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.310 ms 00:16:28.600 [2024-11-17 00:48:20.287592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.287643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.600 [2024-11-17 00:48:20.287653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:28.600 [2024-11-17 00:48:20.287668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:28.600 [2024-11-17 00:48:20.287680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.287791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.600 [2024-11-17 00:48:20.287807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:28.600 [2024-11-17 00:48:20.287818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:28.600 [2024-11-17 00:48:20.287827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.600 [2024-11-17 00:48:20.289229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3995.668 ms, result 0 00:16:28.600 { 00:16:28.600 "name": "ftl0", 00:16:28.600 "uuid": "770ebcb6-0e0c-4532-a2e0-866133c50477" 00:16:28.600 } 00:16:28.600 00:48:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:28.600 00:48:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:28.600 00:48:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:28.600 00:48:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:28.861 [2024-11-17 00:48:20.752836] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:28.861 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:28.861 Zero copy mechanism will not be used. 00:16:28.861 Running I/O for 4 seconds... 
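Note on the run just launched (annotation, not captured test output): the 69632-byte I/O size is what triggers the zero-copy notice above, since it exceeds the 65536-byte threshold. The MiB/s column in the samples below is simply IOPS scaled by the I/O size; a minimal check, using the 973.74 IOPS average from the final results:
  awk 'BEGIN { printf "%.2f MiB/s\n", 973.74 * 69632 / 1048576 }'   # ~64.66, matching the reported average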
00:16:30.750 650.00 IOPS, 43.16 MiB/s [2024-11-17T00:48:24.196Z] 702.00 IOPS, 46.62 MiB/s [2024-11-17T00:48:24.762Z] 694.33 IOPS, 46.11 MiB/s [2024-11-17T00:48:25.020Z] 974.00 IOPS, 64.68 MiB/s 00:16:32.957 Latency(us) 00:16:32.957 [2024-11-17T00:48:25.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.957 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:32.957 ftl0 : 4.00 973.74 64.66 0.00 0.00 1084.69 168.57 2785.28 00:16:32.957 [2024-11-17T00:48:25.020Z] =================================================================================================================== 00:16:32.957 [2024-11-17T00:48:25.020Z] Total : 973.74 64.66 0.00 0.00 1084.69 168.57 2785.28 00:16:32.957 [2024-11-17 00:48:24.761730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:32.957 { 00:16:32.957 "results": [ 00:16:32.957 { 00:16:32.957 "job": "ftl0", 00:16:32.957 "core_mask": "0x1", 00:16:32.957 "workload": "randwrite", 00:16:32.957 "status": "finished", 00:16:32.957 "queue_depth": 1, 00:16:32.957 "io_size": 69632, 00:16:32.957 "runtime": 4.002099, 00:16:32.957 "iops": 973.7390304437746, 00:16:32.957 "mibps": 64.6623574904069, 00:16:32.957 "io_failed": 0, 00:16:32.957 "io_timeout": 0, 00:16:32.957 "avg_latency_us": 1084.6856587907857, 00:16:32.957 "min_latency_us": 168.56615384615384, 00:16:32.957 "max_latency_us": 2785.28 00:16:32.957 } 00:16:32.957 ], 00:16:32.957 "core_count": 1 00:16:32.957 } 00:16:32.957 00:48:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:32.957 [2024-11-17 00:48:24.867564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:32.957 Running I/O for 4 seconds... 
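Note on this second pass (annotation, not captured test output): it switches to 4096-byte random writes at queue depth 128. A Little's-law sanity check against the results reported below - in-flight I/Os roughly equal IOPS times average latency - confirms the queue stayed essentially full for the whole run:
  awk 'BEGIN { printf "%.1f\n", 5631.03 * 22624.82 / 1e6 }'   # ~127.4 of 128 queue slots occupied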
00:16:34.840 7176.00 IOPS, 28.03 MiB/s [2024-11-17T00:48:28.292Z] 6194.00 IOPS, 24.20 MiB/s [2024-11-17T00:48:29.235Z] 5873.33 IOPS, 22.94 MiB/s [2024-11-17T00:48:29.235Z] 5651.00 IOPS, 22.07 MiB/s 00:16:37.172 Latency(us) 00:16:37.172 [2024-11-17T00:48:29.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.172 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.172 ftl0 : 4.04 5631.03 22.00 0.00 0.00 22624.82 340.28 46177.67 00:16:37.172 [2024-11-17T00:48:29.235Z] =================================================================================================================== 00:16:37.172 [2024-11-17T00:48:29.235Z] Total : 5631.03 22.00 0.00 0.00 22624.82 0.00 46177.67 00:16:37.172 [2024-11-17 00:48:28.910844] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:37.172 { 00:16:37.172 "results": [ 00:16:37.172 { 00:16:37.172 "job": "ftl0", 00:16:37.172 "core_mask": "0x1", 00:16:37.172 "workload": "randwrite", 00:16:37.172 "status": "finished", 00:16:37.172 "queue_depth": 128, 00:16:37.172 "io_size": 4096, 00:16:37.172 "runtime": 4.036916, 00:16:37.172 "iops": 5631.031212935815, 00:16:37.172 "mibps": 21.99621567553053, 00:16:37.172 "io_failed": 0, 00:16:37.172 "io_timeout": 0, 00:16:37.172 "avg_latency_us": 22624.81841064443, 00:16:37.172 "min_latency_us": 340.2830769230769, 00:16:37.172 "max_latency_us": 46177.67384615385 00:16:37.172 } 00:16:37.172 ], 00:16:37.172 "core_count": 1 00:16:37.172 } 00:16:37.172 00:48:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:37.172 [2024-11-17 00:48:29.020876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:37.172 Running I/O for 4 seconds... 
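Note on the verify pass (annotation, not captured test output): it reads back and checks what was written over LBA range 0x0-0x1400000, as reported below. That block count lines up with the L2P geometry from startup; a minimal check:
  echo $(( 0x1400000 ))                       # 20971520 blocks, one per L2P entry
  echo $(( 0x1400000 * 4096 / 1073741824 ))   # 80 GiB of addressable user data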
00:16:39.120 4899.00 IOPS, 19.14 MiB/s [2024-11-17T00:48:32.122Z] 5224.00 IOPS, 20.41 MiB/s [2024-11-17T00:48:33.067Z] 5419.67 IOPS, 21.17 MiB/s [2024-11-17T00:48:33.067Z] 5260.00 IOPS, 20.55 MiB/s 00:16:41.004 Latency(us) 00:16:41.004 [2024-11-17T00:48:33.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.004 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:41.004 Verification LBA range: start 0x0 length 0x1400000 00:16:41.004 ftl0 : 4.01 5272.57 20.60 0.00 0.00 24208.56 223.70 39119.95 00:16:41.004 [2024-11-17T00:48:33.067Z] =================================================================================================================== 00:16:41.004 [2024-11-17T00:48:33.067Z] Total : 5272.57 20.60 0.00 0.00 24208.56 0.00 39119.95 00:16:41.004 [2024-11-17 00:48:33.042645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:41.004 { 00:16:41.004 "results": [ 00:16:41.004 { 00:16:41.004 "job": "ftl0", 00:16:41.004 "core_mask": "0x1", 00:16:41.004 "workload": "verify", 00:16:41.004 "status": "finished", 00:16:41.004 "verify_range": { 00:16:41.004 "start": 0, 00:16:41.004 "length": 20971520 00:16:41.004 }, 00:16:41.004 "queue_depth": 128, 00:16:41.004 "io_size": 4096, 00:16:41.004 "runtime": 4.01322, 00:16:41.004 "iops": 5272.574142459172, 00:16:41.004 "mibps": 20.595992743981142, 00:16:41.004 "io_failed": 0, 00:16:41.004 "io_timeout": 0, 00:16:41.004 "avg_latency_us": 24208.556391449758, 00:16:41.004 "min_latency_us": 223.70461538461538, 00:16:41.004 "max_latency_us": 39119.95076923077 00:16:41.004 } 00:16:41.004 ], 00:16:41.004 "core_count": 1 00:16:41.004 } 00:16:41.004 00:48:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:41.265 [2024-11-17 00:48:33.254984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.265 [2024-11-17 00:48:33.255049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:41.265 [2024-11-17 00:48:33.255068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:41.265 [2024-11-17 00:48:33.255077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.265 [2024-11-17 00:48:33.255102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:41.265 [2024-11-17 00:48:33.255859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.265 [2024-11-17 00:48:33.255915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:41.265 [2024-11-17 00:48:33.255928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:16:41.265 [2024-11-17 00:48:33.255943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.265 [2024-11-17 00:48:33.259458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.265 [2024-11-17 00:48:33.259512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:41.265 [2024-11-17 00:48:33.259524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.481 ms 00:16:41.265 [2024-11-17 00:48:33.259539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.482419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.529 [2024-11-17 00:48:33.482672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:41.529 [2024-11-17 00:48:33.482699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 222.852 ms 00:16:41.529 [2024-11-17 00:48:33.482718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.489078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.529 [2024-11-17 00:48:33.489129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:41.529 [2024-11-17 00:48:33.489141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.311 ms 00:16:41.529 [2024-11-17 00:48:33.489155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.492536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.529 [2024-11-17 00:48:33.492737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:41.529 [2024-11-17 00:48:33.492756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.305 ms 00:16:41.529 [2024-11-17 00:48:33.492766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.499615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.529 [2024-11-17 00:48:33.499808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:41.529 [2024-11-17 00:48:33.499828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 00:16:41.529 [2024-11-17 00:48:33.499847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.499985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.529 [2024-11-17 00:48:33.499998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:41.529 [2024-11-17 00:48:33.500008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:16:41.529 [2024-11-17 00:48:33.500018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.503460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.529 [2024-11-17 00:48:33.503515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:41.529 [2024-11-17 00:48:33.503526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.424 ms 00:16:41.529 [2024-11-17 00:48:33.503536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.506350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.529 [2024-11-17 00:48:33.506420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:41.529 [2024-11-17 00:48:33.506431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:16:41.529 [2024-11-17 00:48:33.506441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.508810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.529 [2024-11-17 00:48:33.508866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:41.529 [2024-11-17 00:48:33.508878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:16:41.529 [2024-11-17 00:48:33.508893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.511239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.529 [2024-11-17 00:48:33.511293] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:41.529 [2024-11-17 00:48:33.511303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.271 ms 00:16:41.529 [2024-11-17 00:48:33.511314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.529 [2024-11-17 00:48:33.511376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:41.529 [2024-11-17 00:48:33.511395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:41.529 [2024-11-17 00:48:33.511607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:41.529 [2024-11-17 00:48:33.511933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.511943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.511951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.511963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.511970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.511981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.511988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.511998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512298] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:41.530 [2024-11-17 00:48:33.512342] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:41.530 [2024-11-17 00:48:33.512391] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 770ebcb6-0e0c-4532-a2e0-866133c50477 00:16:41.530 [2024-11-17 00:48:33.512402] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:41.530 [2024-11-17 00:48:33.512414] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:41.530 [2024-11-17 00:48:33.512424] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:41.530 [2024-11-17 00:48:33.512432] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:41.530 [2024-11-17 00:48:33.512444] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:41.530 [2024-11-17 00:48:33.512453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:41.530 [2024-11-17 00:48:33.512463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:41.530 [2024-11-17 00:48:33.512469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:41.530 [2024-11-17 00:48:33.512478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:41.530 [2024-11-17 00:48:33.512486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.530 [2024-11-17 00:48:33.512497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:41.530 [2024-11-17 00:48:33.512506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:16:41.530 [2024-11-17 00:48:33.512519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.530 [2024-11-17 00:48:33.514873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.530 [2024-11-17 00:48:33.514914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:41.530 [2024-11-17 00:48:33.514925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.335 ms 00:16:41.530 [2024-11-17 00:48:33.514939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.530 [2024-11-17 00:48:33.515065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.530 [2024-11-17 00:48:33.515078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:41.530 [2024-11-17 00:48:33.515088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:16:41.530 [2024-11-17 00:48:33.515100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.530 [2024-11-17 00:48:33.522322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.530 [2024-11-17 00:48:33.522433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.530 [2024-11-17 00:48:33.522445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.530 [2024-11-17 00:48:33.522456] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:16:41.530 [2024-11-17 00:48:33.522521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.530 [2024-11-17 00:48:33.522533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:41.530 [2024-11-17 00:48:33.522546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.530 [2024-11-17 00:48:33.522558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.530 [2024-11-17 00:48:33.522641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.530 [2024-11-17 00:48:33.522655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:41.530 [2024-11-17 00:48:33.522666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.530 [2024-11-17 00:48:33.522681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.530 [2024-11-17 00:48:33.522697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.530 [2024-11-17 00:48:33.522710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:41.530 [2024-11-17 00:48:33.522718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.530 [2024-11-17 00:48:33.522731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.530 [2024-11-17 00:48:33.537234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.530 [2024-11-17 00:48:33.537493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:41.530 [2024-11-17 00:48:33.537516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.530 [2024-11-17 00:48:33.537527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.530 [2024-11-17 00:48:33.549808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.530 [2024-11-17 00:48:33.549992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:41.530 [2024-11-17 00:48:33.550055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.530 [2024-11-17 00:48:33.550082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.530 [2024-11-17 00:48:33.550232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.530 [2024-11-17 00:48:33.550267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:41.530 [2024-11-17 00:48:33.550350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.530 [2024-11-17 00:48:33.550401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.530 [2024-11-17 00:48:33.550476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.530 [2024-11-17 00:48:33.550503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:41.531 [2024-11-17 00:48:33.550524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.531 [2024-11-17 00:48:33.550593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.531 [2024-11-17 00:48:33.550698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.531 [2024-11-17 00:48:33.550889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:41.531 [2024-11-17 00:48:33.550957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.531 [2024-11-17 00:48:33.550983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.531 [2024-11-17 00:48:33.551036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.531 [2024-11-17 00:48:33.551062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:41.531 [2024-11-17 00:48:33.551082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.531 [2024-11-17 00:48:33.551105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.531 [2024-11-17 00:48:33.551157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.531 [2024-11-17 00:48:33.551182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:41.531 [2024-11-17 00:48:33.551211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.531 [2024-11-17 00:48:33.551233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.531 [2024-11-17 00:48:33.551371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:41.531 [2024-11-17 00:48:33.551408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:41.531 [2024-11-17 00:48:33.551437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:41.531 [2024-11-17 00:48:33.551462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:41.531 [2024-11-17 00:48:33.551620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 296.588 ms, result 0
00:16:41.531 true
00:16:41.531 00:48:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 85191
00:16:41.531 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 85191 ']'
00:16:41.531 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 85191
00:16:41.531 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname
00:16:41.531 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:41.531 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85191
00:16:41.792 killing process with pid 85191 Received shutdown signal, test time was about 4.000000 seconds
00:16:41.792
00:16:41.792 Latency(us)
00:16:41.792 [2024-11-17T00:48:33.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:41.792 [2024-11-17T00:48:33.855Z] ===================================================================================================================
00:16:41.792 [2024-11-17T00:48:33.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:41.792 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:41.792 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:41.792 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85191'
00:16:41.792 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 85191
00:16:41.792 00:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 85191
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:16:45.094 Remove shared memory files
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:16:45.094 ************************************
00:16:45.094 END TEST ftl_bdevperf
00:16:45.094 ************************************
00:16:45.094
00:16:45.094 real 0m24.374s
00:16:45.094 user 0m27.021s
00:16:45.094 sys 0m1.003s
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:45.094 00:48:36 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:45.094 00:48:36 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:16:45.094 00:48:36 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:16:45.094 00:48:36 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:45.094 00:48:36 ftl -- common/autotest_common.sh@10 -- # set +x
00:16:45.094 ************************************
00:16:45.094 START TEST ftl_trim
00:16:45.094 ************************************
00:16:45.094 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:16:45.094 * Looking for test storage...
00:16:45.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:16:45.094 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:16:45.094 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:16:45.094 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version
00:16:45.094 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:16:45.094 00:48:36 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:45.094 00:48:36 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:45.094 00:48:36 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:45.094 00:48:36 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:16:45.094 00:48:36 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:16:45.094 00:48:36 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:16:45.094 00:48:36 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:45.095 00:48:36 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0
00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:16:45.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:45.095 --rc genhtml_branch_coverage=1
00:16:45.095 --rc genhtml_function_coverage=1
00:16:45.095 --rc genhtml_legend=1
00:16:45.095 --rc geninfo_all_blocks=1
00:16:45.095 --rc geninfo_unexecuted_blocks=1
00:16:45.095
00:16:45.095 '
00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:16:45.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:45.095 --rc genhtml_branch_coverage=1
00:16:45.095 --rc genhtml_function_coverage=1
00:16:45.095 --rc genhtml_legend=1
00:16:45.095 --rc geninfo_all_blocks=1
00:16:45.095 --rc geninfo_unexecuted_blocks=1
00:16:45.095
00:16:45.095 '
00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:16:45.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:45.095 --rc genhtml_branch_coverage=1
00:16:45.095 --rc genhtml_function_coverage=1
00:16:45.095 --rc genhtml_legend=1
00:16:45.095 --rc geninfo_all_blocks=1
00:16:45.095 --rc geninfo_unexecuted_blocks=1
00:16:45.095
00:16:45.095 '
00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:16:45.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:45.095 --rc genhtml_branch_coverage=1
00:16:45.095 --rc genhtml_function_coverage=1
00:16:45.095 --rc genhtml_legend=1
00:16:45.095 --rc geninfo_all_blocks=1
00:16:45.095 --rc geninfo_unexecuted_blocks=1
00:16:45.095
00:16:45.095 '
00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
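The cmp_versions trace above is the harness checking that the installed lcov predates 2.x before exporting the coverage flags that follow. Condensed into a standalone, runnable form, the comparison being stepped through looks roughly like this (a paraphrase based on the xtrace lines, not the verbatim scripts/common.sh helper, which also tracks lt/gt/eq counters and supports the other operators):

    #!/usr/bin/env bash
    # Sketch of the traced comparison: split both versions on '.', '-' or ':'
    # and compare component-wise, treating a missing component as 0.
    cmp_versions() {
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # every component compared equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov older than 2.x"   # decided at v=0: 1 < 2

In the run above the first component already settles the answer (1 < 2), which is why the trace returns 0 at scripts/common.sh@368 and the LCOV_OPTS/LCOV exports are taken.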
00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:45.095 00:48:36 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85543 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85543 00:16:45.095 00:48:36 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85543 ']' 00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.095 00:48:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:45.095 [2024-11-17 00:48:37.026260] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:45.095 [2024-11-17 00:48:37.026679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85543 ] 00:16:45.356 [2024-11-17 00:48:37.173054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:45.356 [2024-11-17 00:48:37.226338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.356 [2024-11-17 00:48:37.226738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.356 [2024-11-17 00:48:37.226745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.929 00:48:37 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.929 00:48:37 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:45.929 00:48:37 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:45.929 00:48:37 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:45.929 00:48:37 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:45.929 00:48:37 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:45.929 00:48:37 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:45.929 00:48:37 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:46.191 00:48:38 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:46.191 00:48:38 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:46.191 00:48:38 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:46.191 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:46.191 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:46.191 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:46.191 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:46.191 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:46.452 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:46.452 { 00:16:46.452 "name": "nvme0n1", 00:16:46.452 "aliases": [ 
00:16:46.452 "e8cd09e2-e2cb-410d-b3dc-2e28a89a953f" 00:16:46.452 ], 00:16:46.452 "product_name": "NVMe disk", 00:16:46.452 "block_size": 4096, 00:16:46.452 "num_blocks": 1310720, 00:16:46.452 "uuid": "e8cd09e2-e2cb-410d-b3dc-2e28a89a953f", 00:16:46.452 "numa_id": -1, 00:16:46.452 "assigned_rate_limits": { 00:16:46.452 "rw_ios_per_sec": 0, 00:16:46.452 "rw_mbytes_per_sec": 0, 00:16:46.452 "r_mbytes_per_sec": 0, 00:16:46.452 "w_mbytes_per_sec": 0 00:16:46.452 }, 00:16:46.452 "claimed": true, 00:16:46.452 "claim_type": "read_many_write_one", 00:16:46.452 "zoned": false, 00:16:46.452 "supported_io_types": { 00:16:46.452 "read": true, 00:16:46.452 "write": true, 00:16:46.452 "unmap": true, 00:16:46.452 "flush": true, 00:16:46.452 "reset": true, 00:16:46.452 "nvme_admin": true, 00:16:46.452 "nvme_io": true, 00:16:46.452 "nvme_io_md": false, 00:16:46.452 "write_zeroes": true, 00:16:46.452 "zcopy": false, 00:16:46.452 "get_zone_info": false, 00:16:46.452 "zone_management": false, 00:16:46.452 "zone_append": false, 00:16:46.452 "compare": true, 00:16:46.452 "compare_and_write": false, 00:16:46.452 "abort": true, 00:16:46.452 "seek_hole": false, 00:16:46.452 "seek_data": false, 00:16:46.452 "copy": true, 00:16:46.452 "nvme_iov_md": false 00:16:46.452 }, 00:16:46.452 "driver_specific": { 00:16:46.452 "nvme": [ 00:16:46.452 { 00:16:46.452 "pci_address": "0000:00:11.0", 00:16:46.452 "trid": { 00:16:46.452 "trtype": "PCIe", 00:16:46.452 "traddr": "0000:00:11.0" 00:16:46.452 }, 00:16:46.452 "ctrlr_data": { 00:16:46.452 "cntlid": 0, 00:16:46.452 "vendor_id": "0x1b36", 00:16:46.452 "model_number": "QEMU NVMe Ctrl", 00:16:46.452 "serial_number": "12341", 00:16:46.452 "firmware_revision": "8.0.0", 00:16:46.452 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:46.452 "oacs": { 00:16:46.452 "security": 0, 00:16:46.452 "format": 1, 00:16:46.452 "firmware": 0, 00:16:46.452 "ns_manage": 1 00:16:46.452 }, 00:16:46.452 "multi_ctrlr": false, 00:16:46.452 "ana_reporting": false 00:16:46.452 }, 00:16:46.452 "vs": { 00:16:46.452 "nvme_version": "1.4" 00:16:46.452 }, 00:16:46.452 "ns_data": { 00:16:46.452 "id": 1, 00:16:46.452 "can_share": false 00:16:46.452 } 00:16:46.452 } 00:16:46.452 ], 00:16:46.452 "mp_policy": "active_passive" 00:16:46.452 } 00:16:46.452 } 00:16:46.452 ]' 00:16:46.452 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:46.452 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:46.452 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:46.452 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:46.452 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:46.452 00:48:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:46.452 00:48:38 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:46.452 00:48:38 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:46.452 00:48:38 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:46.452 00:48:38 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:46.452 00:48:38 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:46.714 00:48:38 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c9ae35bf-1c0e-48ea-b517-4e3633220b37 00:16:46.714 00:48:38 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:46.714 00:48:38 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c9ae35bf-1c0e-48ea-b517-4e3633220b37 00:16:46.977 00:48:38 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:47.238 00:48:39 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=9a1d17b6-263d-4558-8fe4-ead9c9ad21a9 00:16:47.238 00:48:39 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9a1d17b6-263d-4558-8fe4-ead9c9ad21a9 00:16:47.499 00:48:39 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:47.499 00:48:39 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:47.499 00:48:39 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:47.499 00:48:39 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:47.499 00:48:39 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:47.499 00:48:39 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:47.499 00:48:39 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:47.499 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:47.499 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:47.499 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:47.499 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:47.500 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:47.759 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:47.759 { 00:16:47.759 "name": "c8cb3471-64c6-40aa-a5a3-4801d36974c0", 00:16:47.759 "aliases": [ 00:16:47.759 "lvs/nvme0n1p0" 00:16:47.759 ], 00:16:47.759 "product_name": "Logical Volume", 00:16:47.759 "block_size": 4096, 00:16:47.759 "num_blocks": 26476544, 00:16:47.759 "uuid": "c8cb3471-64c6-40aa-a5a3-4801d36974c0", 00:16:47.759 "assigned_rate_limits": { 00:16:47.759 "rw_ios_per_sec": 0, 00:16:47.759 "rw_mbytes_per_sec": 0, 00:16:47.759 "r_mbytes_per_sec": 0, 00:16:47.759 "w_mbytes_per_sec": 0 00:16:47.759 }, 00:16:47.759 "claimed": false, 00:16:47.759 "zoned": false, 00:16:47.759 "supported_io_types": { 00:16:47.759 "read": true, 00:16:47.759 "write": true, 00:16:47.759 "unmap": true, 00:16:47.759 "flush": false, 00:16:47.759 "reset": true, 00:16:47.759 "nvme_admin": false, 00:16:47.759 "nvme_io": false, 00:16:47.759 "nvme_io_md": false, 00:16:47.759 "write_zeroes": true, 00:16:47.759 "zcopy": false, 00:16:47.759 "get_zone_info": false, 00:16:47.759 "zone_management": false, 00:16:47.759 "zone_append": false, 00:16:47.759 "compare": false, 00:16:47.759 "compare_and_write": false, 00:16:47.759 "abort": false, 00:16:47.759 "seek_hole": true, 00:16:47.759 "seek_data": true, 00:16:47.759 "copy": false, 00:16:47.759 "nvme_iov_md": false 00:16:47.759 }, 00:16:47.759 "driver_specific": { 00:16:47.759 "lvol": { 00:16:47.759 "lvol_store_uuid": "9a1d17b6-263d-4558-8fe4-ead9c9ad21a9", 00:16:47.759 "base_bdev": "nvme0n1", 00:16:47.759 "thin_provision": true, 00:16:47.759 "num_allocated_clusters": 0, 00:16:47.759 "snapshot": false, 00:16:47.759 "clone": false, 00:16:47.759 "esnap_clone": false 00:16:47.759 } 00:16:47.759 } 00:16:47.759 } 00:16:47.759 ]' 00:16:47.759 00:48:39 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:47.759 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:47.759 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:47.759 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:47.759 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:47.759 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:47.759 00:48:39 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:47.759 00:48:39 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:47.759 00:48:39 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:48.017 00:48:39 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:48.017 00:48:39 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:48.018 00:48:39 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:48.018 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:48.018 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:48.018 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:48.018 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:48.018 00:48:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:48.277 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:48.277 { 00:16:48.277 "name": "c8cb3471-64c6-40aa-a5a3-4801d36974c0", 00:16:48.277 "aliases": [ 00:16:48.277 "lvs/nvme0n1p0" 00:16:48.277 ], 00:16:48.277 "product_name": "Logical Volume", 00:16:48.277 "block_size": 4096, 00:16:48.277 "num_blocks": 26476544, 00:16:48.277 "uuid": "c8cb3471-64c6-40aa-a5a3-4801d36974c0", 00:16:48.277 "assigned_rate_limits": { 00:16:48.277 "rw_ios_per_sec": 0, 00:16:48.277 "rw_mbytes_per_sec": 0, 00:16:48.277 "r_mbytes_per_sec": 0, 00:16:48.277 "w_mbytes_per_sec": 0 00:16:48.277 }, 00:16:48.277 "claimed": false, 00:16:48.277 "zoned": false, 00:16:48.277 "supported_io_types": { 00:16:48.277 "read": true, 00:16:48.277 "write": true, 00:16:48.277 "unmap": true, 00:16:48.277 "flush": false, 00:16:48.277 "reset": true, 00:16:48.277 "nvme_admin": false, 00:16:48.277 "nvme_io": false, 00:16:48.277 "nvme_io_md": false, 00:16:48.277 "write_zeroes": true, 00:16:48.277 "zcopy": false, 00:16:48.277 "get_zone_info": false, 00:16:48.277 "zone_management": false, 00:16:48.277 "zone_append": false, 00:16:48.277 "compare": false, 00:16:48.277 "compare_and_write": false, 00:16:48.277 "abort": false, 00:16:48.277 "seek_hole": true, 00:16:48.277 "seek_data": true, 00:16:48.277 "copy": false, 00:16:48.277 "nvme_iov_md": false 00:16:48.277 }, 00:16:48.277 "driver_specific": { 00:16:48.277 "lvol": { 00:16:48.277 "lvol_store_uuid": "9a1d17b6-263d-4558-8fe4-ead9c9ad21a9", 00:16:48.277 "base_bdev": "nvme0n1", 00:16:48.277 "thin_provision": true, 00:16:48.277 "num_allocated_clusters": 0, 00:16:48.277 "snapshot": false, 00:16:48.277 "clone": false, 00:16:48.277 "esnap_clone": false 00:16:48.277 } 00:16:48.277 } 00:16:48.277 } 00:16:48.277 ]' 00:16:48.277 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:48.277 00:48:40 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:48.277 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:48.277 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:48.277 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:48.277 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:48.277 00:48:40 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:48.277 00:48:40 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:48.535 00:48:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:48.535 00:48:40 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:48.535 00:48:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:48.535 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:48.535 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:48.535 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:48.535 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:48.535 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8cb3471-64c6-40aa-a5a3-4801d36974c0 00:16:48.535 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:48.535 { 00:16:48.536 "name": "c8cb3471-64c6-40aa-a5a3-4801d36974c0", 00:16:48.536 "aliases": [ 00:16:48.536 "lvs/nvme0n1p0" 00:16:48.536 ], 00:16:48.536 "product_name": "Logical Volume", 00:16:48.536 "block_size": 4096, 00:16:48.536 "num_blocks": 26476544, 00:16:48.536 "uuid": "c8cb3471-64c6-40aa-a5a3-4801d36974c0", 00:16:48.536 "assigned_rate_limits": { 00:16:48.536 "rw_ios_per_sec": 0, 00:16:48.536 "rw_mbytes_per_sec": 0, 00:16:48.536 "r_mbytes_per_sec": 0, 00:16:48.536 "w_mbytes_per_sec": 0 00:16:48.536 }, 00:16:48.536 "claimed": false, 00:16:48.536 "zoned": false, 00:16:48.536 "supported_io_types": { 00:16:48.536 "read": true, 00:16:48.536 "write": true, 00:16:48.536 "unmap": true, 00:16:48.536 "flush": false, 00:16:48.536 "reset": true, 00:16:48.536 "nvme_admin": false, 00:16:48.536 "nvme_io": false, 00:16:48.536 "nvme_io_md": false, 00:16:48.536 "write_zeroes": true, 00:16:48.536 "zcopy": false, 00:16:48.536 "get_zone_info": false, 00:16:48.536 "zone_management": false, 00:16:48.536 "zone_append": false, 00:16:48.536 "compare": false, 00:16:48.536 "compare_and_write": false, 00:16:48.536 "abort": false, 00:16:48.536 "seek_hole": true, 00:16:48.536 "seek_data": true, 00:16:48.536 "copy": false, 00:16:48.536 "nvme_iov_md": false 00:16:48.536 }, 00:16:48.536 "driver_specific": { 00:16:48.536 "lvol": { 00:16:48.536 "lvol_store_uuid": "9a1d17b6-263d-4558-8fe4-ead9c9ad21a9", 00:16:48.536 "base_bdev": "nvme0n1", 00:16:48.536 "thin_provision": true, 00:16:48.536 "num_allocated_clusters": 0, 00:16:48.536 "snapshot": false, 00:16:48.536 "clone": false, 00:16:48.536 "esnap_clone": false 00:16:48.536 } 00:16:48.536 } 00:16:48.536 } 00:16:48.536 ]' 00:16:48.536 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:48.795 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:48.795 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:48.795 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:48.795 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:48.795 00:48:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:48.795 00:48:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:48.795 00:48:40 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c8cb3471-64c6-40aa-a5a3-4801d36974c0 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:48.795 [2024-11-17 00:48:40.818736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.795 [2024-11-17 00:48:40.818775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:48.795 [2024-11-17 00:48:40.818785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:48.795 [2024-11-17 00:48:40.818793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.795 [2024-11-17 00:48:40.820756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.795 [2024-11-17 00:48:40.820869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:48.795 [2024-11-17 00:48:40.820890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.927 ms 00:16:48.795 [2024-11-17 00:48:40.820899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.795 [2024-11-17 00:48:40.820975] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:48.795 [2024-11-17 00:48:40.821169] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:48.795 [2024-11-17 00:48:40.821180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.795 [2024-11-17 00:48:40.821187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:48.795 [2024-11-17 00:48:40.821194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:48.795 [2024-11-17 00:48:40.821201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.795 [2024-11-17 00:48:40.821288] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aa34e40b-328a-44ca-be31-18a041db8592 00:16:48.795 [2024-11-17 00:48:40.822294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.795 [2024-11-17 00:48:40.822320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:48.795 [2024-11-17 00:48:40.822330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:48.795 [2024-11-17 00:48:40.822351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.795 [2024-11-17 00:48:40.827592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.795 [2024-11-17 00:48:40.827690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:48.795 [2024-11-17 00:48:40.827704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.161 ms 00:16:48.795 [2024-11-17 00:48:40.827710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.795 [2024-11-17 00:48:40.827814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.795 [2024-11-17 00:48:40.827821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:48.795 [2024-11-17 00:48:40.827829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.047 ms 00:16:48.795 [2024-11-17 00:48:40.827844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.795 [2024-11-17 00:48:40.827877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.795 [2024-11-17 00:48:40.827885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:48.795 [2024-11-17 00:48:40.827892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:48.795 [2024-11-17 00:48:40.827897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.795 [2024-11-17 00:48:40.827930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:48.795 [2024-11-17 00:48:40.829238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.795 [2024-11-17 00:48:40.829266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:48.795 [2024-11-17 00:48:40.829273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:16:48.796 [2024-11-17 00:48:40.829280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.796 [2024-11-17 00:48:40.829316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.796 [2024-11-17 00:48:40.829326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:48.796 [2024-11-17 00:48:40.829332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:48.796 [2024-11-17 00:48:40.829340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.796 [2024-11-17 00:48:40.829389] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:48.796 [2024-11-17 00:48:40.829503] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:48.796 [2024-11-17 00:48:40.829514] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:48.796 [2024-11-17 00:48:40.829524] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:48.796 [2024-11-17 00:48:40.829532] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:48.796 [2024-11-17 00:48:40.829541] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:48.796 [2024-11-17 00:48:40.829546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:48.796 [2024-11-17 00:48:40.829553] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:48.796 [2024-11-17 00:48:40.829559] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:48.796 [2024-11-17 00:48:40.829575] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:48.796 [2024-11-17 00:48:40.829582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.796 [2024-11-17 00:48:40.829589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:48.796 [2024-11-17 00:48:40.829595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:16:48.796 [2024-11-17 00:48:40.829603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.796 [2024-11-17 00:48:40.829678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.796 
[2024-11-17 00:48:40.829687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:48.796 [2024-11-17 00:48:40.829692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:48.796 [2024-11-17 00:48:40.829707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.796 [2024-11-17 00:48:40.829805] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:48.796 [2024-11-17 00:48:40.829821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:48.796 [2024-11-17 00:48:40.829835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.796 [2024-11-17 00:48:40.829842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.796 [2024-11-17 00:48:40.829849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:48.796 [2024-11-17 00:48:40.829855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:48.796 [2024-11-17 00:48:40.829860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:48.796 [2024-11-17 00:48:40.829868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:48.796 [2024-11-17 00:48:40.829873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:48.796 [2024-11-17 00:48:40.829879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.796 [2024-11-17 00:48:40.829884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:48.796 [2024-11-17 00:48:40.829891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:48.796 [2024-11-17 00:48:40.829897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.796 [2024-11-17 00:48:40.829907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:48.796 [2024-11-17 00:48:40.829913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:48.796 [2024-11-17 00:48:40.829921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.796 [2024-11-17 00:48:40.829927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:48.796 [2024-11-17 00:48:40.829934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:48.796 [2024-11-17 00:48:40.829940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.796 [2024-11-17 00:48:40.829947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:48.796 [2024-11-17 00:48:40.829953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:48.796 [2024-11-17 00:48:40.829960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.796 [2024-11-17 00:48:40.829966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:48.796 [2024-11-17 00:48:40.829973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:48.796 [2024-11-17 00:48:40.829978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.796 [2024-11-17 00:48:40.829985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:48.796 [2024-11-17 00:48:40.829990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:48.796 [2024-11-17 00:48:40.829997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.796 [2024-11-17 00:48:40.830003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:48.796 [2024-11-17 00:48:40.830011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:48.796 [2024-11-17 00:48:40.830016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.796 [2024-11-17 00:48:40.830023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:48.796 [2024-11-17 00:48:40.830029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:48.796 [2024-11-17 00:48:40.830036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.796 [2024-11-17 00:48:40.830042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:48.796 [2024-11-17 00:48:40.830049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:48.796 [2024-11-17 00:48:40.830054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.796 [2024-11-17 00:48:40.830061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:48.796 [2024-11-17 00:48:40.830067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:48.796 [2024-11-17 00:48:40.830073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.796 [2024-11-17 00:48:40.830079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:48.796 [2024-11-17 00:48:40.830086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:48.796 [2024-11-17 00:48:40.830091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.796 [2024-11-17 00:48:40.830098] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:48.796 [2024-11-17 00:48:40.830104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:48.796 [2024-11-17 00:48:40.830113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.796 [2024-11-17 00:48:40.830120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.796 [2024-11-17 00:48:40.830128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:48.796 [2024-11-17 00:48:40.830133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:48.796 [2024-11-17 00:48:40.830140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:48.796 [2024-11-17 00:48:40.830147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:48.796 [2024-11-17 00:48:40.830153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:48.796 [2024-11-17 00:48:40.830159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:48.796 [2024-11-17 00:48:40.830169] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:48.796 [2024-11-17 00:48:40.830177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.796 [2024-11-17 00:48:40.830186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:48.796 [2024-11-17 00:48:40.830192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:48.796 [2024-11-17 00:48:40.830199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:48.796 [2024-11-17 00:48:40.830205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:48.796 [2024-11-17 00:48:40.830214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:48.796 [2024-11-17 00:48:40.830220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:48.796 [2024-11-17 00:48:40.830229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:48.796 [2024-11-17 00:48:40.830235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:48.796 [2024-11-17 00:48:40.830243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:48.796 [2024-11-17 00:48:40.830248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:48.796 [2024-11-17 00:48:40.830256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:48.797 [2024-11-17 00:48:40.830262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:48.797 [2024-11-17 00:48:40.830269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:48.797 [2024-11-17 00:48:40.830275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:48.797 [2024-11-17 00:48:40.830281] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:48.797 [2024-11-17 00:48:40.830287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.797 [2024-11-17 00:48:40.830294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:48.797 [2024-11-17 00:48:40.830299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:48.797 [2024-11-17 00:48:40.830305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:48.797 [2024-11-17 00:48:40.830314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:48.797 [2024-11-17 00:48:40.830321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.797 [2024-11-17 00:48:40.830326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:48.797 [2024-11-17 00:48:40.830337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:16:48.797 [2024-11-17 00:48:40.830343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.797 [2024-11-17 00:48:40.830432] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:48.797 [2024-11-17 00:48:40.830448] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:51.325 [2024-11-17 00:48:43.175137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.175190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:51.325 [2024-11-17 00:48:43.175213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2344.690 ms 00:16:51.325 [2024-11-17 00:48:43.175225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.195762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.195902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:51.325 [2024-11-17 00:48:43.195961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.416 ms 00:16:51.325 [2024-11-17 00:48:43.196000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.196513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.196617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:51.325 [2024-11-17 00:48:43.196664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:16:51.325 [2024-11-17 00:48:43.196696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.210394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.210462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.325 [2024-11-17 00:48:43.210483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.585 ms 00:16:51.325 [2024-11-17 00:48:43.210496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.210616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.210634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.325 [2024-11-17 00:48:43.210650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.325 [2024-11-17 00:48:43.210661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.211045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.211080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.325 [2024-11-17 00:48:43.211098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:16:51.325 [2024-11-17 00:48:43.211111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.211314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.211329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.325 [2024-11-17 00:48:43.211345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:51.325 [2024-11-17 00:48:43.211389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.218132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.218175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:51.325 [2024-11-17 00:48:43.218192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.677 ms 00:16:51.325 [2024-11-17 00:48:43.218217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.227078] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:51.325 [2024-11-17 00:48:43.241556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.241592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:51.325 [2024-11-17 00:48:43.241602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.232 ms 00:16:51.325 [2024-11-17 00:48:43.241611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.295389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.295434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:51.325 [2024-11-17 00:48:43.295446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.692 ms 00:16:51.325 [2024-11-17 00:48:43.295459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.295633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.295646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:51.325 [2024-11-17 00:48:43.295657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:51.325 [2024-11-17 00:48:43.295666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.298499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.298535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:51.325 [2024-11-17 00:48:43.298546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:16:51.325 [2024-11-17 00:48:43.298555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.300975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.301109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:51.325 [2024-11-17 00:48:43.301125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.389 ms 00:16:51.325 [2024-11-17 00:48:43.301135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.301447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.301459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:51.325 [2024-11-17 00:48:43.301470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:16:51.325 [2024-11-17 00:48:43.301481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.327309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.327453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:51.325 [2024-11-17 00:48:43.327470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.788 ms 00:16:51.325 [2024-11-17 00:48:43.327479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
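For orientation while the startup trace scrolls past: mngt/ftl_mngt.c logs every management step as an Action (or Rollback) line followed by its name, duration and status, so a healthy bring-up reads as a long run of status: 0 quadruples like the ones above. The device stack this 'FTL startup' process is initializing was assembled by the rpc.py calls traced earlier in the test; they are collected below purely as a readable summary, not additional test output. The addresses and UUIDs are the values from this particular run, and replaying the sequence by hand would assume a running spdk_tgt to talk to:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # base device: QEMU NVMe namespace at 0000:00:11.0 (4096 B x 1310720 blocks = 5120 MiB)
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # lvstore on the namespace, plus a thin-provisioned 103424 MiB lvol on top
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 9a1d17b6-263d-4558-8fe4-ead9c9ad21a9
    # cache device at 0000:00:10.0, split into a 5171 MiB NV-cache partition
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # the FTL bdev whose startup is being traced here
    $rpc -t 240 bdev_ftl_create -b ftl0 -d c8cb3471-64c6-40aa-a5a3-4801d36974c0 \
        -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The repeated get_bdev_size probes in the same stretch reduce to block_size * num_blocks / 1024 / 1024: 4096 * 1310720 gives the 5120 MiB namespace and 4096 * 26476544 the 103424 MiB lvol reported above.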
00:16:51.325 [2024-11-17 00:48:43.331327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.331464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:51.325 [2024-11-17 00:48:43.331478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:16:51.325 [2024-11-17 00:48:43.331490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.325 [2024-11-17 00:48:43.334367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.325 [2024-11-17 00:48:43.334468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:51.325 [2024-11-17 00:48:43.334528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:16:51.326 [2024-11-17 00:48:43.334555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.326 [2024-11-17 00:48:43.337946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.326 [2024-11-17 00:48:43.338048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:51.326 [2024-11-17 00:48:43.338103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:16:51.326 [2024-11-17 00:48:43.338130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.326 [2024-11-17 00:48:43.338190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.326 [2024-11-17 00:48:43.338215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:51.326 [2024-11-17 00:48:43.338235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.326 [2024-11-17 00:48:43.338257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.326 [2024-11-17 00:48:43.338434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.326 [2024-11-17 00:48:43.338487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:51.326 [2024-11-17 00:48:43.338531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:51.326 [2024-11-17 00:48:43.338555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.326 [2024-11-17 00:48:43.339421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.326 [2024-11-17 00:48:43.340481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2520.391 ms, result 0 00:16:51.326 [2024-11-17 00:48:43.341314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:51.326 { 00:16:51.326 "name": "ftl0", 00:16:51.326 "uuid": "aa34e40b-328a-44ca-be31-18a041db8592" 00:16:51.326 } 00:16:51.326 00:48:43 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:51.326 00:48:43 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:51.326 00:48:43 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:51.326 00:48:43 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:51.326 00:48:43 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:51.326 00:48:43 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:51.326 00:48:43 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:51.584 00:48:43 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:51.843 [ 00:16:51.843 { 00:16:51.843 "name": "ftl0", 00:16:51.843 "aliases": [ 00:16:51.843 "aa34e40b-328a-44ca-be31-18a041db8592" 00:16:51.843 ], 00:16:51.843 "product_name": "FTL disk", 00:16:51.843 "block_size": 4096, 00:16:51.843 "num_blocks": 23592960, 00:16:51.843 "uuid": "aa34e40b-328a-44ca-be31-18a041db8592", 00:16:51.843 "assigned_rate_limits": { 00:16:51.843 "rw_ios_per_sec": 0, 00:16:51.843 "rw_mbytes_per_sec": 0, 00:16:51.843 "r_mbytes_per_sec": 0, 00:16:51.843 "w_mbytes_per_sec": 0 00:16:51.843 }, 00:16:51.843 "claimed": false, 00:16:51.843 "zoned": false, 00:16:51.843 "supported_io_types": { 00:16:51.843 "read": true, 00:16:51.843 "write": true, 00:16:51.843 "unmap": true, 00:16:51.843 "flush": true, 00:16:51.843 "reset": false, 00:16:51.843 "nvme_admin": false, 00:16:51.843 "nvme_io": false, 00:16:51.843 "nvme_io_md": false, 00:16:51.843 "write_zeroes": true, 00:16:51.843 "zcopy": false, 00:16:51.843 "get_zone_info": false, 00:16:51.843 "zone_management": false, 00:16:51.843 "zone_append": false, 00:16:51.843 "compare": false, 00:16:51.843 "compare_and_write": false, 00:16:51.843 "abort": false, 00:16:51.843 "seek_hole": false, 00:16:51.843 "seek_data": false, 00:16:51.843 "copy": false, 00:16:51.843 "nvme_iov_md": false 00:16:51.843 }, 00:16:51.843 "driver_specific": { 00:16:51.843 "ftl": { 00:16:51.843 "base_bdev": "c8cb3471-64c6-40aa-a5a3-4801d36974c0", 00:16:51.843 "cache": "nvc0n1p0" 00:16:51.843 } 00:16:51.843 } 00:16:51.843 } 00:16:51.843 ] 00:16:51.843 00:48:43 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:51.843 00:48:43 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:51.843 00:48:43 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:52.101 00:48:43 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:52.101 00:48:43 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:52.361 00:48:44 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:52.361 { 00:16:52.361 "name": "ftl0", 00:16:52.361 "aliases": [ 00:16:52.361 "aa34e40b-328a-44ca-be31-18a041db8592" 00:16:52.361 ], 00:16:52.361 "product_name": "FTL disk", 00:16:52.361 "block_size": 4096, 00:16:52.361 "num_blocks": 23592960, 00:16:52.361 "uuid": "aa34e40b-328a-44ca-be31-18a041db8592", 00:16:52.361 "assigned_rate_limits": { 00:16:52.361 "rw_ios_per_sec": 0, 00:16:52.361 "rw_mbytes_per_sec": 0, 00:16:52.361 "r_mbytes_per_sec": 0, 00:16:52.361 "w_mbytes_per_sec": 0 00:16:52.361 }, 00:16:52.361 "claimed": false, 00:16:52.361 "zoned": false, 00:16:52.361 "supported_io_types": { 00:16:52.361 "read": true, 00:16:52.361 "write": true, 00:16:52.361 "unmap": true, 00:16:52.361 "flush": true, 00:16:52.361 "reset": false, 00:16:52.361 "nvme_admin": false, 00:16:52.361 "nvme_io": false, 00:16:52.361 "nvme_io_md": false, 00:16:52.361 "write_zeroes": true, 00:16:52.361 "zcopy": false, 00:16:52.361 "get_zone_info": false, 00:16:52.361 "zone_management": false, 00:16:52.361 "zone_append": false, 00:16:52.361 "compare": false, 00:16:52.361 "compare_and_write": false, 00:16:52.361 "abort": false, 00:16:52.361 "seek_hole": false, 00:16:52.361 "seek_data": false, 00:16:52.361 "copy": false, 00:16:52.361 "nvme_iov_md": false 00:16:52.361 }, 00:16:52.361 "driver_specific": { 00:16:52.361 "ftl": { 00:16:52.361 "base_bdev": "c8cb3471-64c6-40aa-a5a3-4801d36974c0", 
00:16:52.361 "cache": "nvc0n1p0" 00:16:52.361 } 00:16:52.361 } 00:16:52.361 } 00:16:52.361 ]' 00:16:52.361 00:48:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:52.361 00:48:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:52.361 00:48:44 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:52.361 [2024-11-17 00:48:44.388450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.388491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:52.361 [2024-11-17 00:48:44.388506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:52.361 [2024-11-17 00:48:44.388514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.388566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:52.361 [2024-11-17 00:48:44.389006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.389024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:52.361 [2024-11-17 00:48:44.389033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:16:52.361 [2024-11-17 00:48:44.389042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.389644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.389663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:52.361 [2024-11-17 00:48:44.389678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:16:52.361 [2024-11-17 00:48:44.389692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.393350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.393377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:52.361 [2024-11-17 00:48:44.393387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:16:52.361 [2024-11-17 00:48:44.393397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.400319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.400373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:52.361 [2024-11-17 00:48:44.400382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:16:52.361 [2024-11-17 00:48:44.400393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.402127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.402163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:52.361 [2024-11-17 00:48:44.402172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.625 ms 00:16:52.361 [2024-11-17 00:48:44.402181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.406391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.406424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:52.361 [2024-11-17 00:48:44.406434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.163 ms 00:16:52.361 [2024-11-17 00:48:44.406443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.406645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.406663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:52.361 [2024-11-17 00:48:44.406674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:52.361 [2024-11-17 00:48:44.406682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.408626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.408659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:52.361 [2024-11-17 00:48:44.408668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:16:52.361 [2024-11-17 00:48:44.408679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.409985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.410018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:52.361 [2024-11-17 00:48:44.410026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:16:52.361 [2024-11-17 00:48:44.410035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.411038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.411071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:52.361 [2024-11-17 00:48:44.411079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:16:52.361 [2024-11-17 00:48:44.411087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.412155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.361 [2024-11-17 00:48:44.412269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:52.361 [2024-11-17 00:48:44.412283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:16:52.361 [2024-11-17 00:48:44.412291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.361 [2024-11-17 00:48:44.412334] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (100 identical ftl_dev_dump_bands lines)
00:16:52.363 [2024-11-17 00:48:44.413225] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:52.363 [2024-11-17 00:48:44.413232] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa34e40b-328a-44ca-be31-18a041db8592 00:16:52.363 [2024-11-17 00:48:44.413242] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:52.363 [2024-11-17 00:48:44.413249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:52.363 [2024-11-17 00:48:44.413257] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:52.363 [2024-11-17 00:48:44.413264] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:52.363 [2024-11-17 00:48:44.413274] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:52.363 [2024-11-17 00:48:44.413283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
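For readers tracing the test flow: trim.sh steps @51 through @61 above amount to the shell sequence below. This is a paraphrase of the traced commands rather than the test's verbatim source, and the redirect of the subsystem dump into ftl.json is an assumption inferred from the --json file that spdk_dd consumes later in this log.

  # waitforbdev ftl0: let bdev examine callbacks settle, then poll with a 2000 ms timeout
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 > /dev/null
  # wrap the saved bdev subsystem config so the result is one complete JSON document
  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # target path assumed
  # pull num_blocks out of the bdev description
  nb=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')
  # tear the device down; this is what produced the 'FTL shutdown' trace in this log
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

Two numbers in the dump above are worth decoding. nb=23592960 blocks at the 4096-byte block size is 23592960 x 4096 B = 96636764160 B, exactly 90 GiB of user-visible space, consistent with the "L2P entries: 23592960" line printed at device creation. And "WAF: inf" is expected for this run: reading write amplification as total media writes over user writes, the stats give 960 / 0 (all 960 writes were FTL metadata, none were user I/O), which is printed as inf.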
[2024-11-17 00:48:44.413291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:52.363 [2024-11-17 00:48:44.413297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:52.363 [2024-11-17 00:48:44.413305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:52.363 [2024-11-17 00:48:44.413312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.363 [2024-11-17 00:48:44.413331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:52.363 [2024-11-17 00:48:44.413339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:16:52.363 [2024-11-17 00:48:44.413349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.363 [2024-11-17 00:48:44.414865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.363 [2024-11-17 00:48:44.414884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:52.363 [2024-11-17 00:48:44.414892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:16:52.363 [2024-11-17 00:48:44.414904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.363 [2024-11-17 00:48:44.414991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.363 [2024-11-17 00:48:44.415001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:52.363 [2024-11-17 00:48:44.415009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:52.363 [2024-11-17 00:48:44.415017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.363 [2024-11-17 00:48:44.420576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.363 [2024-11-17 00:48:44.420675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:52.363 [2024-11-17 00:48:44.420734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.363 [2024-11-17 00:48:44.420788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.363 [2024-11-17 00:48:44.420897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.363 [2024-11-17 00:48:44.420924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:52.363 [2024-11-17 00:48:44.420971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.363 [2024-11-17 00:48:44.420996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.363 [2024-11-17 00:48:44.421065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.363 [2024-11-17 00:48:44.421090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:52.363 [2024-11-17 00:48:44.421110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.363 [2024-11-17 00:48:44.421157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.363 [2024-11-17 00:48:44.421228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.363 [2024-11-17 00:48:44.421252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:52.363 [2024-11-17 00:48:44.421303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.363 [2024-11-17 00:48:44.421327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.622 [2024-11-17 00:48:44.430680] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:52.622 [2024-11-17 00:48:44.430805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:52.622 [2024-11-17 00:48:44.430866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.622 [2024-11-17 00:48:44.430946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.622 [2024-11-17 00:48:44.438708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.622 [2024-11-17 00:48:44.438823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:52.622 [2024-11-17 00:48:44.438874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.622 [2024-11-17 00:48:44.438900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.622 [2024-11-17 00:48:44.438977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.622 [2024-11-17 00:48:44.439030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:52.622 [2024-11-17 00:48:44.439054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.622 [2024-11-17 00:48:44.439074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.622 [2024-11-17 00:48:44.439167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.622 [2024-11-17 00:48:44.439192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:52.622 [2024-11-17 00:48:44.439211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.622 [2024-11-17 00:48:44.439244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.622 [2024-11-17 00:48:44.439388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.622 [2024-11-17 00:48:44.439465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:52.622 [2024-11-17 00:48:44.439519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.622 [2024-11-17 00:48:44.439553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.622 [2024-11-17 00:48:44.439625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.622 [2024-11-17 00:48:44.439732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:52.622 [2024-11-17 00:48:44.439756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.622 [2024-11-17 00:48:44.439777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.622 [2024-11-17 00:48:44.439844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.622 [2024-11-17 00:48:44.439955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:52.622 [2024-11-17 00:48:44.439978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.622 [2024-11-17 00:48:44.439998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.622 [2024-11-17 00:48:44.440068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.622 [2024-11-17 00:48:44.440176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:52.622 [2024-11-17 00:48:44.440210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.622 [2024-11-17 00:48:44.440231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.622 
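Once the 'FTL shutdown' management process just below reports result 0, the test kills the first SPDK app instance (pid 85543). The killprocess trace that follows is the autotest_common.sh helper; reconstructed from the traced commands only, it behaves roughly like this sketch (not the verbatim function):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1      # no pid supplied
    kill -0 "$pid" || return 1     # signal 0 only probes that the process exists
    if [ "$(uname)" = Linux ]; then
      # look up the command name; here it is reactor_0 (an SPDK reactor),
      # so the special case for sudo-wrapped processes is skipped
      process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                    # reap the child; wait returns its exit status
  }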
[2024-11-17 00:48:44.440445] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.981 ms, result 0 00:16:52.622 true 00:16:52.622 00:48:44 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85543 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85543 ']' 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85543 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85543 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85543' 00:16:52.622 killing process with pid 85543 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85543 00:16:52.622 00:48:44 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85543 00:16:57.889 00:48:49 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:58.148 65536+0 records in 00:16:58.148 65536+0 records out 00:16:58.148 268435456 bytes (268 MB, 256 MiB) copied, 0.819945 s, 327 MB/s 00:16:58.148 00:48:50 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:58.148 [2024-11-17 00:48:50.163875] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
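Decoding the dd line above: bs=4K count=65536 is 65536 x 4096 B = 268435456 B = 256 MiB of random data, and 268435456 B / 0.819945 s is the reported 327 MB/s. spdk_dd, whose startup banner appears just above, then writes that pattern through the re-created FTL bdev. The flags below are exactly the ones traced; only the dd output path is an assumption, since the trace shows the pattern file only via spdk_dd's --if argument:

  # generate the 256 MiB random payload (65536 x 4096 B)
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
  # file -> bdev copy: --json re-creates the bdev stack saved in ftl.json,
  # --if reads a regular file, --ob names the output bdev
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json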
00:16:58.148 [2024-11-17 00:48:50.163987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85698 ] 00:16:58.409 [2024-11-17 00:48:50.312377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.409 [2024-11-17 00:48:50.355533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.409 [2024-11-17 00:48:50.470070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.409 [2024-11-17 00:48:50.470155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.672 [2024-11-17 00:48:50.630941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.672 [2024-11-17 00:48:50.631004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.672 [2024-11-17 00:48:50.631018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:58.672 [2024-11-17 00:48:50.631033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.633578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.672 [2024-11-17 00:48:50.633820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.672 [2024-11-17 00:48:50.633845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.523 ms 00:16:58.672 [2024-11-17 00:48:50.633854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.634077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.672 [2024-11-17 00:48:50.634393] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.672 [2024-11-17 00:48:50.634417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.672 [2024-11-17 00:48:50.634426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.672 [2024-11-17 00:48:50.634442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:16:58.672 [2024-11-17 00:48:50.634452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.636122] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:58.672 [2024-11-17 00:48:50.639905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.672 [2024-11-17 00:48:50.639957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:58.672 [2024-11-17 00:48:50.639968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.784 ms 00:16:58.672 [2024-11-17 00:48:50.639984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.640060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.672 [2024-11-17 00:48:50.640071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:58.672 [2024-11-17 00:48:50.640080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:58.672 [2024-11-17 00:48:50.640089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.648108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:58.672 [2024-11-17 00:48:50.648299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.672 [2024-11-17 00:48:50.648317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.963 ms 00:16:58.672 [2024-11-17 00:48:50.648325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.648487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.672 [2024-11-17 00:48:50.648501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.672 [2024-11-17 00:48:50.648515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:58.672 [2024-11-17 00:48:50.648523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.648563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.672 [2024-11-17 00:48:50.648578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.672 [2024-11-17 00:48:50.648586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:58.672 [2024-11-17 00:48:50.648594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.648616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:58.672 [2024-11-17 00:48:50.650571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.672 [2024-11-17 00:48:50.650607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.672 [2024-11-17 00:48:50.650618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.960 ms 00:16:58.672 [2024-11-17 00:48:50.650626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.650674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.672 [2024-11-17 00:48:50.650686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.672 [2024-11-17 00:48:50.650698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:58.672 [2024-11-17 00:48:50.650709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.672 [2024-11-17 00:48:50.650732] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:58.673 [2024-11-17 00:48:50.650754] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:58.673 [2024-11-17 00:48:50.650792] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:58.673 [2024-11-17 00:48:50.650808] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:58.673 [2024-11-17 00:48:50.650917] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:58.673 [2024-11-17 00:48:50.650931] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:58.673 [2024-11-17 00:48:50.650942] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:58.673 [2024-11-17 00:48:50.650953] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.673 [2024-11-17 00:48:50.650964] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.673 [2024-11-17 00:48:50.650973] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:58.673 [2024-11-17 00:48:50.650981] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:58.673 [2024-11-17 00:48:50.650989] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:58.673 [2024-11-17 00:48:50.650998] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:58.673 [2024-11-17 00:48:50.651007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.673 [2024-11-17 00:48:50.651017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.673 [2024-11-17 00:48:50.651028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:16:58.673 [2024-11-17 00:48:50.651037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.673 [2024-11-17 00:48:50.651126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.673 [2024-11-17 00:48:50.651136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.673 [2024-11-17 00:48:50.651144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:58.673 [2024-11-17 00:48:50.651155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.673 [2024-11-17 00:48:50.651258] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.673 [2024-11-17 00:48:50.651276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.673 [2024-11-17 00:48:50.651287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.673 [2024-11-17 00:48:50.651298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.673 [2024-11-17 00:48:50.651311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.673 [2024-11-17 00:48:50.651321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.673 [2024-11-17 00:48:50.651329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:58.673 [2024-11-17 00:48:50.651339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.673 [2024-11-17 00:48:50.651550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:58.673 [2024-11-17 00:48:50.651597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.673 [2024-11-17 00:48:50.651625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.673 [2024-11-17 00:48:50.651649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:58.673 [2024-11-17 00:48:50.651672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.673 [2024-11-17 00:48:50.651694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.673 [2024-11-17 00:48:50.651715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:58.673 [2024-11-17 00:48:50.651734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.673 [2024-11-17 00:48:50.651755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:58.673 [2024-11-17 00:48:50.651823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:58.673 [2024-11-17 00:48:50.651848] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.673 [2024-11-17 00:48:50.651867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.673 [2024-11-17 00:48:50.651889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:58.673 [2024-11-17 00:48:50.651909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.673 [2024-11-17 00:48:50.651929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.673 [2024-11-17 00:48:50.651950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:58.673 [2024-11-17 00:48:50.651976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.673 [2024-11-17 00:48:50.651996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.673 [2024-11-17 00:48:50.652016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:58.673 [2024-11-17 00:48:50.652073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.673 [2024-11-17 00:48:50.652097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:58.673 [2024-11-17 00:48:50.652116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:58.673 [2024-11-17 00:48:50.652137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.673 [2024-11-17 00:48:50.652156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.673 [2024-11-17 00:48:50.652176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:58.673 [2024-11-17 00:48:50.652198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.673 [2024-11-17 00:48:50.652250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.673 [2024-11-17 00:48:50.652273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:58.673 [2024-11-17 00:48:50.652296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.673 [2024-11-17 00:48:50.652315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:58.673 [2024-11-17 00:48:50.652336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:58.673 [2024-11-17 00:48:50.652385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.673 [2024-11-17 00:48:50.652885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:58.673 [2024-11-17 00:48:50.652914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:58.673 [2024-11-17 00:48:50.652924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.673 [2024-11-17 00:48:50.652931] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.673 [2024-11-17 00:48:50.652941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.673 [2024-11-17 00:48:50.652950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.673 [2024-11-17 00:48:50.652966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.673 [2024-11-17 00:48:50.652975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.673 [2024-11-17 00:48:50.652983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.673 [2024-11-17 00:48:50.652992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:58.673 
[2024-11-17 00:48:50.653003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.673 [2024-11-17 00:48:50.653010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.673 [2024-11-17 00:48:50.653018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:58.673 [2024-11-17 00:48:50.653028] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.673 [2024-11-17 00:48:50.653041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.673 [2024-11-17 00:48:50.653051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:58.673 [2024-11-17 00:48:50.653062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:58.673 [2024-11-17 00:48:50.653075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:58.673 [2024-11-17 00:48:50.653083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:58.673 [2024-11-17 00:48:50.653091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:58.673 [2024-11-17 00:48:50.653099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:58.673 [2024-11-17 00:48:50.653106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:58.673 [2024-11-17 00:48:50.653115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:58.673 [2024-11-17 00:48:50.653123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:58.673 [2024-11-17 00:48:50.653130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:58.673 [2024-11-17 00:48:50.653137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:58.673 [2024-11-17 00:48:50.653144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:58.673 [2024-11-17 00:48:50.653151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:58.673 [2024-11-17 00:48:50.653160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:58.673 [2024-11-17 00:48:50.653168] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.673 [2024-11-17 00:48:50.653177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.673 [2024-11-17 00:48:50.653186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.673 [2024-11-17 00:48:50.653196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.673 [2024-11-17 00:48:50.653205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.673 [2024-11-17 00:48:50.653213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:58.673 [2024-11-17 00:48:50.653223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.653237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.674 [2024-11-17 00:48:50.653248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:16:58.674 [2024-11-17 00:48:50.653256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.678412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.678506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.674 [2024-11-17 00:48:50.678553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.054 ms 00:16:58.674 [2024-11-17 00:48:50.678574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.678936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.678984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.674 [2024-11-17 00:48:50.679008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:16:58.674 [2024-11-17 00:48:50.679036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.691670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.691873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.674 [2024-11-17 00:48:50.691892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.570 ms 00:16:58.674 [2024-11-17 00:48:50.691902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.691978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.691989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.674 [2024-11-17 00:48:50.692001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.674 [2024-11-17 00:48:50.692009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.692525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.692566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.674 [2024-11-17 00:48:50.692587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:16:58.674 [2024-11-17 00:48:50.692598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.692758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.692768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.674 [2024-11-17 00:48:50.692779] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:58.674 [2024-11-17 00:48:50.692792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.700044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.700098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.674 [2024-11-17 00:48:50.700113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.227 ms 00:16:58.674 [2024-11-17 00:48:50.700125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.703987] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:58.674 [2024-11-17 00:48:50.704043] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:58.674 [2024-11-17 00:48:50.704062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.704071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:58.674 [2024-11-17 00:48:50.704080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.847 ms 00:16:58.674 [2024-11-17 00:48:50.704088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.720024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.720071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:58.674 [2024-11-17 00:48:50.720084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.873 ms 00:16:58.674 [2024-11-17 00:48:50.720092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.723029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.723195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:58.674 [2024-11-17 00:48:50.723214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:16:58.674 [2024-11-17 00:48:50.723222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.725530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.725575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:58.674 [2024-11-17 00:48:50.725594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:16:58.674 [2024-11-17 00:48:50.725601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.674 [2024-11-17 00:48:50.725953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.674 [2024-11-17 00:48:50.725973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.674 [2024-11-17 00:48:50.725982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:16:58.674 [2024-11-17 00:48:50.725994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.935 [2024-11-17 00:48:50.750760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.935 [2024-11-17 00:48:50.750816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:58.935 [2024-11-17 00:48:50.750831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.743 ms 00:16:58.935 [2024-11-17 00:48:50.750847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.936 [2024-11-17 00:48:50.759068] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:58.936 [2024-11-17 00:48:50.778635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.936 [2024-11-17 00:48:50.778684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:58.936 [2024-11-17 00:48:50.778697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.688 ms 00:16:58.936 [2024-11-17 00:48:50.778707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.936 [2024-11-17 00:48:50.778798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.936 [2024-11-17 00:48:50.778810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:58.936 [2024-11-17 00:48:50.778819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:58.936 [2024-11-17 00:48:50.778828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.936 [2024-11-17 00:48:50.778885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.936 [2024-11-17 00:48:50.778900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:58.936 [2024-11-17 00:48:50.778909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:58.936 [2024-11-17 00:48:50.778918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.936 [2024-11-17 00:48:50.778942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.936 [2024-11-17 00:48:50.778951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.936 [2024-11-17 00:48:50.778960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:58.936 [2024-11-17 00:48:50.778968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.936 [2024-11-17 00:48:50.779007] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:58.936 [2024-11-17 00:48:50.779020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.936 [2024-11-17 00:48:50.779030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:58.936 [2024-11-17 00:48:50.779047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:58.936 [2024-11-17 00:48:50.779055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.936 [2024-11-17 00:48:50.785031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.936 [2024-11-17 00:48:50.785250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:58.936 [2024-11-17 00:48:50.785270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.955 ms 00:16:58.936 [2024-11-17 00:48:50.785279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.936 [2024-11-17 00:48:50.785396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.936 [2024-11-17 00:48:50.785407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:58.936 [2024-11-17 00:48:50.785423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:58.936 [2024-11-17 00:48:50.785431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.936 
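A note on reading the superblock layout tables dumped during this startup: each region is listed as type / version / block offset / block size, with sizes in 4096-byte FTL blocks. Taking the nvc entry "type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00" (by offset and size this is evidently the L2P region): 0x5a00 = 23040 blocks, and 23040 x 4096 B = 94371840 B = 90.00 MiB, while 0x20 = 32 blocks = 0.12 MiB; both match the "Region l2p ... offset: 0.12 MiB ... blocks: 90.00 MiB" lines in the human-readable dump above. The same arithmetic reconciles the other regions with their hex entries.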
[2024-11-17 00:48:50.786486] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:58.936
[2024-11-17 00:48:50.787791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.199 ms, result 0 00:16:58.936
[2024-11-17 00:48:50.789206] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:58.936
[2024-11-17 00:48:50.796446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.880
[2024-11-17T00:49:06.350Z] Copying: 256/256 [MB] (average 16 MBps)
[2024-11-17 00:49:06.225540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:14.287
[2024-11-17 00:49:06.227343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.287
[2024-11-17 00:49:06.227415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:14.287
[2024-11-17 00:49:06.227430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:14.287
[2024-11-17 00:49:06.227454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.287
[2024-11-17 00:49:06.227477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:14.287
[2024-11-17 00:49:06.228161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.287
[2024-11-17 00:49:06.228197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:14.287
[2024-11-17 00:49:06.228210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:17:14.287
[2024-11-17 00:49:06.228220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.287
[2024-11-17 00:49:06.231505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.287
[2024-11-17 00:49:06.231554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:14.287
[2024-11-17 00:49:06.231565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:17:14.287
[2024-11-17 00:49:06.231573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.287
[2024-11-17 00:49:06.240585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.287
[2024-11-17 00:49:06.240645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:14.287
[2024-11-17 00:49:06.240657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.983 ms
00:17:14.287 [2024-11-17 00:49:06.240665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.287 [2024-11-17 00:49:06.247616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.287 [2024-11-17 00:49:06.247666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:14.287 [2024-11-17 00:49:06.247678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.902 ms 00:17:14.287 [2024-11-17 00:49:06.247686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.287 [2024-11-17 00:49:06.250510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.287 [2024-11-17 00:49:06.250722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:14.287 [2024-11-17 00:49:06.250741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:17:14.287 [2024-11-17 00:49:06.250749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.287 [2024-11-17 00:49:06.255790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.287 [2024-11-17 00:49:06.255849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:14.287 [2024-11-17 00:49:06.255865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.984 ms 00:17:14.287 [2024-11-17 00:49:06.255878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.287 [2024-11-17 00:49:06.256011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.287 [2024-11-17 00:49:06.256022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:14.287 [2024-11-17 00:49:06.256031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:14.287 [2024-11-17 00:49:06.256040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.287 [2024-11-17 00:49:06.259315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.287 [2024-11-17 00:49:06.259373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:14.287 [2024-11-17 00:49:06.259383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:17:14.287 [2024-11-17 00:49:06.259391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.288 [2024-11-17 00:49:06.262233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.288 [2024-11-17 00:49:06.262439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:14.288 [2024-11-17 00:49:06.262457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:17:14.288 [2024-11-17 00:49:06.262464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.288 [2024-11-17 00:49:06.264910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.288 [2024-11-17 00:49:06.264974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:14.288 [2024-11-17 00:49:06.264987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.146 ms 00:17:14.288 [2024-11-17 00:49:06.264995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.288 [2024-11-17 00:49:06.267279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.288 [2024-11-17 00:49:06.267328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:14.288 [2024-11-17 00:49:06.267339] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms 00:17:14.288 [2024-11-17 00:49:06.267347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.288 [2024-11-17 00:49:06.267407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:14.288 [2024-11-17 00:49:06.267424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.267999] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.268007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.268014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.268023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:14.288 [2024-11-17 00:49:06.268030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 
00:49:06.268188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:14.289 [2024-11-17 00:49:06.268220] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:14.289 [2024-11-17 00:49:06.268229] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa34e40b-328a-44ca-be31-18a041db8592 00:17:14.289 [2024-11-17 00:49:06.268237] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:14.289 [2024-11-17 00:49:06.268257] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:14.289 [2024-11-17 00:49:06.268265] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:14.289 [2024-11-17 00:49:06.268273] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:14.289 [2024-11-17 00:49:06.268281] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:14.289 [2024-11-17 00:49:06.268289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:14.289 [2024-11-17 00:49:06.268297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:14.289 [2024-11-17 00:49:06.268303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:14.289 [2024-11-17 00:49:06.268309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:14.289 [2024-11-17 00:49:06.268316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.289 [2024-11-17 00:49:06.268326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:14.289 [2024-11-17 00:49:06.268335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:17:14.289 [2024-11-17 00:49:06.268345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.270960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.289 [2024-11-17 00:49:06.271109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:14.289 [2024-11-17 00:49:06.271172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:17:14.289 [2024-11-17 00:49:06.271198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.271336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.289 [2024-11-17 00:49:06.271390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:14.289 [2024-11-17 00:49:06.271574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:14.289 [2024-11-17 00:49:06.271598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.278954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289 [2024-11-17 00:49:06.279113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:14.289 [2024-11-17 00:49:06.279168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289 [2024-11-17 00:49:06.279193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.279296] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:14.289 [2024-11-17 00:49:06.279323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:14.289 [2024-11-17 00:49:06.279347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289 [2024-11-17 00:49:06.279388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.279635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289 [2024-11-17 00:49:06.279687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:14.289 [2024-11-17 00:49:06.279709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289 [2024-11-17 00:49:06.279728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.279758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289 [2024-11-17 00:49:06.279778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:14.289 [2024-11-17 00:49:06.279797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289 [2024-11-17 00:49:06.279920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.293211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289 [2024-11-17 00:49:06.293402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:14.289 [2024-11-17 00:49:06.293460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289 [2024-11-17 00:49:06.293483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.303467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289 [2024-11-17 00:49:06.303630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:14.289 [2024-11-17 00:49:06.303694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289 [2024-11-17 00:49:06.303718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.303779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289 [2024-11-17 00:49:06.303805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:14.289 [2024-11-17 00:49:06.303826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289 [2024-11-17 00:49:06.303846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.303888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289 [2024-11-17 00:49:06.303909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:14.289 [2024-11-17 00:49:06.303930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289 [2024-11-17 00:49:06.303994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289 [2024-11-17 00:49:06.304100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289 [2024-11-17 00:49:06.304127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:14.289 [2024-11-17 00:49:06.304156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289 [2024-11-17 00:49:06.304175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:14.289
[2024-11-17 00:49:06.304227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289
[2024-11-17 00:49:06.304253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:14.289
[2024-11-17 00:49:06.304273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289
[2024-11-17 00:49:06.304292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289
[2024-11-17 00:49:06.304369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289
[2024-11-17 00:49:06.304450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:14.289
[2024-11-17 00:49:06.304521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289
[2024-11-17 00:49:06.304543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289
[2024-11-17 00:49:06.304647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.289
[2024-11-17 00:49:06.304676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:14.289
[2024-11-17 00:49:06.304705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.289
[2024-11-17 00:49:06.304726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.289
[2024-11-17 00:49:06.304891] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 77.521 ms, result 0 00:17:14.862
00:17:14.862
00:17:14.862
00:49:06 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85881 00:17:14.862
00:49:06 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85881 00:17:14.862
00:49:06 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85881 ']' 00:17:14.862
00:49:06 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.863
00:49:06 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:14.863
00:49:06 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.863
00:49:06 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.863
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.863
00:49:06 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.863
00:49:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:14.863
[2024-11-17 00:49:06.890609] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
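The trace above is the standard autotest launch handshake: trim.sh backgrounds spdk_tgt with the ftl_init log flag, records its pid in svcpid, and waitforlisten (an autotest_common.sh helper) blocks until the target answers on /var/tmp/spdk.sock. A minimal bash sketch of that pattern, assuming a plain polling loop rather than the exact autotest_common.sh implementation (rpc_get_methods is used here only as a cheap liveness probe):

    # Launch the SPDK target and wait for its RPC socket.
    # Sketch only, not the verbatim waitforlisten helper.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    # Poll until the target services RPCs on the UNIX socket, or give up.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; do
        (( max_retries-- > 0 )) || { echo "spdk_tgt (pid $svcpid) never listened on $rpc_addr" >&2; exit 1; }
        sleep 0.1
    done

Once the socket answers, the test drives everything else over that same socket, starting with the load_config call traced below.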
00:17:14.863 [2024-11-17 00:49:06.891031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85881 ] 00:17:15.124 [2024-11-17 00:49:07.045962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.124 [2024-11-17 00:49:07.096624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.695 00:49:07 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.695 00:49:07 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:15.695 00:49:07 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:15.955 [2024-11-17 00:49:07.945544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:15.955 [2024-11-17 00:49:07.945621] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:16.217 [2024-11-17 00:49:08.123089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.217 [2024-11-17 00:49:08.123154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:16.217 [2024-11-17 00:49:08.123170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:16.217 [2024-11-17 00:49:08.123181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.217 [2024-11-17 00:49:08.125759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.217 [2024-11-17 00:49:08.125965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.217 [2024-11-17 00:49:08.125986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:17:16.218 [2024-11-17 00:49:08.125997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.126205] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:16.218 [2024-11-17 00:49:08.126551] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:16.218 [2024-11-17 00:49:08.126577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.126589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.218 [2024-11-17 00:49:08.126600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:17:16.218 [2024-11-17 00:49:08.126611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.128730] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:16.218 [2024-11-17 00:49:08.132653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.132708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:16.218 [2024-11-17 00:49:08.132725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.919 ms 00:17:16.218 [2024-11-17 00:49:08.132734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.132820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.132834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:16.218 [2024-11-17 00:49:08.132848] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:16.218 [2024-11-17 00:49:08.132857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.140962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.141004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.218 [2024-11-17 00:49:08.141017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.051 ms 00:17:16.218 [2024-11-17 00:49:08.141026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.141149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.141160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.218 [2024-11-17 00:49:08.141171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:16.218 [2024-11-17 00:49:08.141179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.141211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.141221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:16.218 [2024-11-17 00:49:08.141234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:16.218 [2024-11-17 00:49:08.141245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.141274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:16.218 [2024-11-17 00:49:08.143334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.143415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.218 [2024-11-17 00:49:08.143426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:17:16.218 [2024-11-17 00:49:08.143441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.143485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.143496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:16.218 [2024-11-17 00:49:08.143505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:16.218 [2024-11-17 00:49:08.143514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.143547] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:16.218 [2024-11-17 00:49:08.143573] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:16.218 [2024-11-17 00:49:08.143618] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:16.218 [2024-11-17 00:49:08.143640] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:16.218 [2024-11-17 00:49:08.143748] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:16.218 [2024-11-17 00:49:08.143763] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:16.218 [2024-11-17 00:49:08.143774] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:16.218 [2024-11-17 00:49:08.143787] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:16.218 [2024-11-17 00:49:08.143798] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:16.218 [2024-11-17 00:49:08.143814] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:16.218 [2024-11-17 00:49:08.143822] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:16.218 [2024-11-17 00:49:08.143832] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:16.218 [2024-11-17 00:49:08.143841] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:16.218 [2024-11-17 00:49:08.143852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.143862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:16.218 [2024-11-17 00:49:08.143872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:17:16.218 [2024-11-17 00:49:08.143881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.143970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.218 [2024-11-17 00:49:08.143981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:16.218 [2024-11-17 00:49:08.143992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:16.218 [2024-11-17 00:49:08.144000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.218 [2024-11-17 00:49:08.144106] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:16.218 [2024-11-17 00:49:08.144119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:16.218 [2024-11-17 00:49:08.144133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:16.218 [2024-11-17 00:49:08.144143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:16.218 [2024-11-17 00:49:08.144166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:16.218 [2024-11-17 00:49:08.144184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:16.218 [2024-11-17 00:49:08.144204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:16.218 [2024-11-17 00:49:08.144223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:16.218 [2024-11-17 00:49:08.144231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:16.218 [2024-11-17 00:49:08.144244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:16.218 [2024-11-17 00:49:08.144254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:16.218 [2024-11-17 00:49:08.144264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:16.218 [2024-11-17 00:49:08.144271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.218 
[2024-11-17 00:49:08.144282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:16.218 [2024-11-17 00:49:08.144292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:16.218 [2024-11-17 00:49:08.144303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:16.218 [2024-11-17 00:49:08.144327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.218 [2024-11-17 00:49:08.144347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:16.218 [2024-11-17 00:49:08.144372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.218 [2024-11-17 00:49:08.144390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:16.218 [2024-11-17 00:49:08.144402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.218 [2024-11-17 00:49:08.144421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:16.218 [2024-11-17 00:49:08.144428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.218 [2024-11-17 00:49:08.144446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:16.218 [2024-11-17 00:49:08.144455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:16.218 [2024-11-17 00:49:08.144474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:16.218 [2024-11-17 00:49:08.144482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:16.218 [2024-11-17 00:49:08.144495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:16.218 [2024-11-17 00:49:08.144503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:16.218 [2024-11-17 00:49:08.144513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:16.218 [2024-11-17 00:49:08.144520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:16.218 [2024-11-17 00:49:08.144535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:16.218 [2024-11-17 00:49:08.144545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.218 [2024-11-17 00:49:08.144552] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:16.218 [2024-11-17 00:49:08.144579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:16.219 [2024-11-17 00:49:08.144591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:16.219 [2024-11-17 00:49:08.144600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.219 [2024-11-17 00:49:08.144610] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:16.219 [2024-11-17 00:49:08.144620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:16.219 [2024-11-17 00:49:08.144626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:16.219 [2024-11-17 00:49:08.144635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:16.219 [2024-11-17 00:49:08.144643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:16.219 [2024-11-17 00:49:08.144656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:16.219 [2024-11-17 00:49:08.144666] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:16.219 [2024-11-17 00:49:08.144678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:16.219 [2024-11-17 00:49:08.144687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:16.219 [2024-11-17 00:49:08.144697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:16.219 [2024-11-17 00:49:08.144705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:16.219 [2024-11-17 00:49:08.144714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:16.219 [2024-11-17 00:49:08.144721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:16.219 [2024-11-17 00:49:08.144733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:16.219 [2024-11-17 00:49:08.144741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:16.219 [2024-11-17 00:49:08.144750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:16.219 [2024-11-17 00:49:08.144757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:16.219 [2024-11-17 00:49:08.144766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:16.219 [2024-11-17 00:49:08.144773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:16.219 [2024-11-17 00:49:08.144783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:16.219 [2024-11-17 00:49:08.144792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:16.219 [2024-11-17 00:49:08.144804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:16.219 [2024-11-17 00:49:08.144811] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:16.219 [2024-11-17 
00:49:08.144821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:16.219 [2024-11-17 00:49:08.144833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:16.219 [2024-11-17 00:49:08.144843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:16.219 [2024-11-17 00:49:08.144850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:16.219 [2024-11-17 00:49:08.144859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:16.219 [2024-11-17 00:49:08.144867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.144882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:16.219 [2024-11-17 00:49:08.144890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:17:16.219 [2024-11-17 00:49:08.144899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.158741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.158959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:16.219 [2024-11-17 00:49:08.158980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.784 ms 00:17:16.219 [2024-11-17 00:49:08.158991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.159127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.159145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:16.219 [2024-11-17 00:49:08.159158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:16.219 [2024-11-17 00:49:08.159169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.171235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.171285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:16.219 [2024-11-17 00:49:08.171296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.043 ms 00:17:16.219 [2024-11-17 00:49:08.171307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.171393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.171410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.219 [2024-11-17 00:49:08.171418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:16.219 [2024-11-17 00:49:08.171428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.171921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.171966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.219 [2024-11-17 00:49:08.171978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:17:16.219 [2024-11-17 00:49:08.171995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.172149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.172163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.219 [2024-11-17 00:49:08.172177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:16.219 [2024-11-17 00:49:08.172188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.193038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.193110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:16.219 [2024-11-17 00:49:08.193128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.822 ms 00:17:16.219 [2024-11-17 00:49:08.193148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.197342] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:16.219 [2024-11-17 00:49:08.197423] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:16.219 [2024-11-17 00:49:08.197442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.197458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:16.219 [2024-11-17 00:49:08.197471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms 00:17:16.219 [2024-11-17 00:49:08.197485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.213631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.213683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:16.219 [2024-11-17 00:49:08.213696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.048 ms 00:17:16.219 [2024-11-17 00:49:08.213710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.216509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.216574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:16.219 [2024-11-17 00:49:08.216585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:17:16.219 [2024-11-17 00:49:08.216595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.219308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.219384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:16.219 [2024-11-17 00:49:08.219395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:17:16.219 [2024-11-17 00:49:08.219405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.219770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.219789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:16.219 [2024-11-17 00:49:08.219800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:16.219 [2024-11-17 00:49:08.219809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 
00:49:08.243995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.244060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:16.219 [2024-11-17 00:49:08.244073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.161 ms 00:17:16.219 [2024-11-17 00:49:08.244087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.252297] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:16.219 [2024-11-17 00:49:08.271071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.271311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:16.219 [2024-11-17 00:49:08.271337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.888 ms 00:17:16.219 [2024-11-17 00:49:08.271346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.271462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.271474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:16.219 [2024-11-17 00:49:08.271492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:16.219 [2024-11-17 00:49:08.271503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.219 [2024-11-17 00:49:08.271565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.219 [2024-11-17 00:49:08.271575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:16.219 [2024-11-17 00:49:08.271589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:16.220 [2024-11-17 00:49:08.271597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.220 [2024-11-17 00:49:08.271627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.220 [2024-11-17 00:49:08.271636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:16.220 [2024-11-17 00:49:08.271651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:16.220 [2024-11-17 00:49:08.271662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.220 [2024-11-17 00:49:08.271702] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:16.220 [2024-11-17 00:49:08.271712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.220 [2024-11-17 00:49:08.271722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:16.220 [2024-11-17 00:49:08.271731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:16.220 [2024-11-17 00:49:08.271740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.220 [2024-11-17 00:49:08.278129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.481 [2024-11-17 00:49:08.278306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:16.481 [2024-11-17 00:49:08.278326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.366 ms 00:17:16.481 [2024-11-17 00:49:08.278337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.481 [2024-11-17 00:49:08.278448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.481 [2024-11-17 00:49:08.278462] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:16.481 [2024-11-17 00:49:08.278472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:16.481 [2024-11-17 00:49:08.278482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.481 [2024-11-17 00:49:08.279566] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:16.481 [2024-11-17 00:49:08.280950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.122 ms, result 0 00:17:16.481 [2024-11-17 00:49:08.282821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:16.481 Some configs were skipped because the RPC state that can call them passed over. 00:17:16.481 00:49:08 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:16.481 [2024-11-17 00:49:08.524888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.481 [2024-11-17 00:49:08.525063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:16.481 [2024-11-17 00:49:08.525132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.312 ms 00:17:16.481 [2024-11-17 00:49:08.525158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.481 [2024-11-17 00:49:08.525214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.642 ms, result 0 00:17:16.481 true 00:17:16.481 00:49:08 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:16.743 [2024-11-17 00:49:08.740502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.743 [2024-11-17 00:49:08.740579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:16.743 [2024-11-17 00:49:08.740592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.644 ms 00:17:16.743 [2024-11-17 00:49:08.740603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.743 [2024-11-17 00:49:08.740641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.782 ms, result 0 00:17:16.743 true 00:17:16.743 00:49:08 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85881 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85881 ']' 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85881 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85881 00:17:16.743 killing process with pid 85881 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85881' 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85881 00:17:16.743 00:49:08 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85881 00:17:17.006 [2024-11-17 00:49:08.908816] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.006 [2024-11-17 00:49:08.908875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:17.006 [2024-11-17 00:49:08.908891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:17.006 [2024-11-17 00:49:08.908899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.006 [2024-11-17 00:49:08.908926] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:17.006 [2024-11-17 00:49:08.909454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.006 [2024-11-17 00:49:08.909479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:17.006 [2024-11-17 00:49:08.909488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:17:17.006 [2024-11-17 00:49:08.909503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.006 [2024-11-17 00:49:08.909798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.006 [2024-11-17 00:49:08.909812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:17.006 [2024-11-17 00:49:08.909821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:17.006 [2024-11-17 00:49:08.909832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.006 [2024-11-17 00:49:08.914507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.006 [2024-11-17 00:49:08.914708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:17.006 [2024-11-17 00:49:08.914726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.654 ms 00:17:17.006 [2024-11-17 00:49:08.914736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.006 [2024-11-17 00:49:08.921652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.006 [2024-11-17 00:49:08.921800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:17.006 [2024-11-17 00:49:08.921818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:17:17.006 [2024-11-17 00:49:08.921830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.006 [2024-11-17 00:49:08.924318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.006 [2024-11-17 00:49:08.924397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:17.006 [2024-11-17 00:49:08.924408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:17:17.006 [2024-11-17 00:49:08.924417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.006 [2024-11-17 00:49:08.928940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.006 [2024-11-17 00:49:08.929093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:17.006 [2024-11-17 00:49:08.929111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.479 ms 00:17:17.006 [2024-11-17 00:49:08.929121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.006 [2024-11-17 00:49:08.929254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.006 [2024-11-17 00:49:08.929273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:17.006 [2024-11-17 00:49:08.929282] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:17.006 [2024-11-17 00:49:08.929294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.006 [2024-11-17 00:49:08.932694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.006 [2024-11-17 00:49:08.932739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:17.006 [2024-11-17 00:49:08.932749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:17:17.006 [2024-11-17 00:49:08.932762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.006 [2024-11-17 00:49:08.935390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.007 [2024-11-17 00:49:08.935538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:17.007 [2024-11-17 00:49:08.935554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:17:17.007 [2024-11-17 00:49:08.935564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.007 [2024-11-17 00:49:08.938092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.007 [2024-11-17 00:49:08.938142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:17.007 [2024-11-17 00:49:08.938153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.489 ms 00:17:17.007 [2024-11-17 00:49:08.938162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.007 [2024-11-17 00:49:08.940223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.007 [2024-11-17 00:49:08.940269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:17.007 [2024-11-17 00:49:08.940280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.979 ms 00:17:17.007 [2024-11-17 00:49:08.940289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.007 [2024-11-17 00:49:08.940328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:17.007 [2024-11-17 00:49:08.940346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940470] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 
[2024-11-17 00:49:08.940752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:17.007 [2024-11-17 00:49:08.940966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.940990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:17.007 [2024-11-17 00:49:08.941119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:17.008 [2024-11-17 00:49:08.941367] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:17.008 [2024-11-17 00:49:08.941376] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa34e40b-328a-44ca-be31-18a041db8592 00:17:17.008 [2024-11-17 00:49:08.941386] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:17.008 [2024-11-17 00:49:08.941395] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:17.008 [2024-11-17 00:49:08.941405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:17.008 [2024-11-17 00:49:08.941416] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:17.008 [2024-11-17 00:49:08.941425] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:17.008 [2024-11-17 00:49:08.941433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:17.008 [2024-11-17 00:49:08.941448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:17.008 [2024-11-17 00:49:08.941455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:17.008 [2024-11-17 00:49:08.941463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:17.008 [2024-11-17 00:49:08.941471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:17.008 [2024-11-17 00:49:08.941480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:17.008 [2024-11-17 00:49:08.941490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:17:17.008 [2024-11-17 00:49:08.941501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.943460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.008 [2024-11-17 00:49:08.943496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:17.008 [2024-11-17 00:49:08.943512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:17:17.008 [2024-11-17 00:49:08.943523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.943662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.008 [2024-11-17 00:49:08.943676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:17.008 [2024-11-17 00:49:08.943686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:17.008 [2024-11-17 00:49:08.943697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.950686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.950730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.008 [2024-11-17 00:49:08.950740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.950754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.950824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.950836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.008 [2024-11-17 00:49:08.950844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.950857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.950901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.950917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.008 [2024-11-17 00:49:08.950927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.950938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.950958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.950971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.008 [2024-11-17 00:49:08.950979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.950993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.963589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.963640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.008 [2024-11-17 00:49:08.963651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.963661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 
00:49:08.973297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.973349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.008 [2024-11-17 00:49:08.973601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.973618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.973664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.973678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.008 [2024-11-17 00:49:08.973687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.973701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.973757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.973770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.008 [2024-11-17 00:49:08.973779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.973789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.973870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.973884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.008 [2024-11-17 00:49:08.973896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.973909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.973947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.973960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:17.008 [2024-11-17 00:49:08.973970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.973986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.974025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.974038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.008 [2024-11-17 00:49:08.974046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.974056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.974105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.008 [2024-11-17 00:49:08.974117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.008 [2024-11-17 00:49:08.974126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.008 [2024-11-17 00:49:08.974137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.008 [2024-11-17 00:49:08.974278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.437 ms, result 0 00:17:17.279 00:49:09 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:17.279 00:49:09 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:17.279 [2024-11-17 00:49:09.287319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:17.279 [2024-11-17 00:49:09.287481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85918 ] 00:17:17.541 [2024-11-17 00:49:09.440653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.541 [2024-11-17 00:49:09.477587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.541 [2024-11-17 00:49:09.590284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:17.541 [2024-11-17 00:49:09.590392] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:17.803 [2024-11-17 00:49:09.752163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.752414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:17.803 [2024-11-17 00:49:09.752440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:17.803 [2024-11-17 00:49:09.752450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.755007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.755059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.803 [2024-11-17 00:49:09.755074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.530 ms 00:17:17.803 [2024-11-17 00:49:09.755082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.755185] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:17.803 [2024-11-17 00:49:09.755476] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:17.803 [2024-11-17 00:49:09.755496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.755504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.803 [2024-11-17 00:49:09.755546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:17.803 [2024-11-17 00:49:09.755555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.759154] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:17.803 [2024-11-17 00:49:09.765266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.765326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:17.803 [2024-11-17 00:49:09.765338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.121 ms 00:17:17.803 [2024-11-17 00:49:09.765351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.765480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.765492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:17.803 [2024-11-17 00:49:09.765503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:17:17.803 [2024-11-17 00:49:09.765516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.773677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.773722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.803 [2024-11-17 00:49:09.773737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.113 ms 00:17:17.803 [2024-11-17 00:49:09.773746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.773888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.773901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.803 [2024-11-17 00:49:09.773911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:17.803 [2024-11-17 00:49:09.773919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.773950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.773963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:17.803 [2024-11-17 00:49:09.773972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:17.803 [2024-11-17 00:49:09.773981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.774004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:17.803 [2024-11-17 00:49:09.776159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.776208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.803 [2024-11-17 00:49:09.776219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:17:17.803 [2024-11-17 00:49:09.776228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.776283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.776299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:17.803 [2024-11-17 00:49:09.776308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:17.803 [2024-11-17 00:49:09.776316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.776340] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:17.803 [2024-11-17 00:49:09.776380] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:17.803 [2024-11-17 00:49:09.776418] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:17.803 [2024-11-17 00:49:09.776436] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:17.803 [2024-11-17 00:49:09.776545] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:17.803 [2024-11-17 00:49:09.776594] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:17.803 [2024-11-17 00:49:09.776614] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:17.803 [2024-11-17 00:49:09.776625] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:17.803 [2024-11-17 00:49:09.776635] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:17.803 [2024-11-17 00:49:09.776646] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:17.803 [2024-11-17 00:49:09.776655] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:17.803 [2024-11-17 00:49:09.776666] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:17.803 [2024-11-17 00:49:09.776674] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:17.803 [2024-11-17 00:49:09.776687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.776698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:17.803 [2024-11-17 00:49:09.776709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:17:17.803 [2024-11-17 00:49:09.776719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.776813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.803 [2024-11-17 00:49:09.776829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:17.803 [2024-11-17 00:49:09.776839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:17.803 [2024-11-17 00:49:09.776847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.803 [2024-11-17 00:49:09.776952] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:17.803 [2024-11-17 00:49:09.776985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:17.803 [2024-11-17 00:49:09.776998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.803 [2024-11-17 00:49:09.777021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.803 [2024-11-17 00:49:09.777034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:17.803 [2024-11-17 00:49:09.777048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:17.803 [2024-11-17 00:49:09.777059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:17.803 [2024-11-17 00:49:09.777068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:17.803 [2024-11-17 00:49:09.777080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:17.803 [2024-11-17 00:49:09.777088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.803 [2024-11-17 00:49:09.777097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:17.804 [2024-11-17 00:49:09.777106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:17.804 [2024-11-17 00:49:09.777116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.804 [2024-11-17 00:49:09.777125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:17.804 [2024-11-17 00:49:09.777133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:17.804 [2024-11-17 00:49:09.777141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777150] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:17.804 [2024-11-17 00:49:09.777159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:17.804 [2024-11-17 00:49:09.777170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:17.804 [2024-11-17 00:49:09.777190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.804 [2024-11-17 00:49:09.777207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:17.804 [2024-11-17 00:49:09.777217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.804 [2024-11-17 00:49:09.777237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:17.804 [2024-11-17 00:49:09.777244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.804 [2024-11-17 00:49:09.777260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:17.804 [2024-11-17 00:49:09.777267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.804 [2024-11-17 00:49:09.777284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:17.804 [2024-11-17 00:49:09.777299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.804 [2024-11-17 00:49:09.777320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:17.804 [2024-11-17 00:49:09.777337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:17.804 [2024-11-17 00:49:09.777350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.804 [2024-11-17 00:49:09.777383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:17.804 [2024-11-17 00:49:09.777391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:17.804 [2024-11-17 00:49:09.777398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:17.804 [2024-11-17 00:49:09.777417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:17.804 [2024-11-17 00:49:09.777424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777432] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:17.804 [2024-11-17 00:49:09.777440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:17.804 [2024-11-17 00:49:09.777453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.804 [2024-11-17 00:49:09.777463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.804 [2024-11-17 00:49:09.777474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:17.804 
[2024-11-17 00:49:09.777481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:17.804 [2024-11-17 00:49:09.777490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:17.804 [2024-11-17 00:49:09.777498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:17.804 [2024-11-17 00:49:09.777505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:17.804 [2024-11-17 00:49:09.777513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:17.804 [2024-11-17 00:49:09.777523] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:17.804 [2024-11-17 00:49:09.777533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.804 [2024-11-17 00:49:09.777547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:17.804 [2024-11-17 00:49:09.777557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:17.804 [2024-11-17 00:49:09.777564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:17.804 [2024-11-17 00:49:09.777572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:17.804 [2024-11-17 00:49:09.777580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:17.804 [2024-11-17 00:49:09.777589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:17.804 [2024-11-17 00:49:09.777597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:17.804 [2024-11-17 00:49:09.777605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:17.804 [2024-11-17 00:49:09.777612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:17.804 [2024-11-17 00:49:09.777619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:17.804 [2024-11-17 00:49:09.777629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:17.804 [2024-11-17 00:49:09.777637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:17.804 [2024-11-17 00:49:09.777644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:17.804 [2024-11-17 00:49:09.777652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:17.804 [2024-11-17 00:49:09.777661] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:17.804 [2024-11-17 00:49:09.777670] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.804 [2024-11-17 00:49:09.777679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:17.804 [2024-11-17 00:49:09.777689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:17.804 [2024-11-17 00:49:09.777696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:17.804 [2024-11-17 00:49:09.777706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:17.804 [2024-11-17 00:49:09.777714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.804 [2024-11-17 00:49:09.777722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:17.804 [2024-11-17 00:49:09.777732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:17:17.804 [2024-11-17 00:49:09.777743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.804 [2024-11-17 00:49:09.800282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.804 [2024-11-17 00:49:09.800345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.804 [2024-11-17 00:49:09.800383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.436 ms 00:17:17.804 [2024-11-17 00:49:09.800394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.804 [2024-11-17 00:49:09.800553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.804 [2024-11-17 00:49:09.800607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:17.804 [2024-11-17 00:49:09.800617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:17.804 [2024-11-17 00:49:09.800632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.804 [2024-11-17 00:49:09.813201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.804 [2024-11-17 00:49:09.813248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.804 [2024-11-17 00:49:09.813260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.544 ms 00:17:17.804 [2024-11-17 00:49:09.813272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.804 [2024-11-17 00:49:09.813348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.804 [2024-11-17 00:49:09.813388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.804 [2024-11-17 00:49:09.813401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:17.804 [2024-11-17 00:49:09.813409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.804 [2024-11-17 00:49:09.813861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.804 [2024-11-17 00:49:09.813897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.804 [2024-11-17 00:49:09.813910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:17:17.804 [2024-11-17 00:49:09.813920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.804 [2024-11-17 
00:49:09.814075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.804 [2024-11-17 00:49:09.814090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.804 [2024-11-17 00:49:09.814099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:17.804 [2024-11-17 00:49:09.814111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.804 [2024-11-17 00:49:09.821202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.804 [2024-11-17 00:49:09.821255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.804 [2024-11-17 00:49:09.821267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.067 ms 00:17:17.804 [2024-11-17 00:49:09.821278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.804 [2024-11-17 00:49:09.825281] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:17.805 [2024-11-17 00:49:09.825346] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:17.805 [2024-11-17 00:49:09.825399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.805 [2024-11-17 00:49:09.825409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:17.805 [2024-11-17 00:49:09.825419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:17:17.805 [2024-11-17 00:49:09.825427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.805 [2024-11-17 00:49:09.841496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.805 [2024-11-17 00:49:09.841545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:17.805 [2024-11-17 00:49:09.841558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.982 ms 00:17:17.805 [2024-11-17 00:49:09.841567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.805 [2024-11-17 00:49:09.844657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.805 [2024-11-17 00:49:09.844704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:17.805 [2024-11-17 00:49:09.844716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:17:17.805 [2024-11-17 00:49:09.844723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.805 [2024-11-17 00:49:09.847523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.805 [2024-11-17 00:49:09.847579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:17.805 [2024-11-17 00:49:09.847590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:17:17.805 [2024-11-17 00:49:09.847598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.805 [2024-11-17 00:49:09.847946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.805 [2024-11-17 00:49:09.847960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:17.805 [2024-11-17 00:49:09.847973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:17:17.805 [2024-11-17 00:49:09.847980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.066 [2024-11-17 00:49:09.871713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:18.066 [2024-11-17 00:49:09.871775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:18.066 [2024-11-17 00:49:09.871788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.710 ms 00:17:18.066 [2024-11-17 00:49:09.871797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.066 [2024-11-17 00:49:09.880094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:18.066 [2024-11-17 00:49:09.899611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.066 [2024-11-17 00:49:09.899661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:18.066 [2024-11-17 00:49:09.899674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.718 ms 00:17:18.066 [2024-11-17 00:49:09.899684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.066 [2024-11-17 00:49:09.899781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.066 [2024-11-17 00:49:09.899795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:18.066 [2024-11-17 00:49:09.899806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:18.066 [2024-11-17 00:49:09.899814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.066 [2024-11-17 00:49:09.899874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.066 [2024-11-17 00:49:09.899885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:18.066 [2024-11-17 00:49:09.899894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:18.066 [2024-11-17 00:49:09.899902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.066 [2024-11-17 00:49:09.899925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.066 [2024-11-17 00:49:09.899934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:18.066 [2024-11-17 00:49:09.899943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:18.066 [2024-11-17 00:49:09.899951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.066 [2024-11-17 00:49:09.899987] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:18.066 [2024-11-17 00:49:09.900002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.066 [2024-11-17 00:49:09.900010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:18.066 [2024-11-17 00:49:09.900018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:18.066 [2024-11-17 00:49:09.900025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.066 [2024-11-17 00:49:09.906053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.066 [2024-11-17 00:49:09.906102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:18.066 [2024-11-17 00:49:09.906114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.007 ms 00:17:18.066 [2024-11-17 00:49:09.906122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.066 [2024-11-17 00:49:09.906219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.066 [2024-11-17 00:49:09.906233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:18.066 [2024-11-17 00:49:09.906242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:18.066 [2024-11-17 00:49:09.906250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.066 [2024-11-17 00:49:09.907266] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.066 [2024-11-17 00:49:09.908675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.783 ms, result 0 00:17:18.066 [2024-11-17 00:49:09.910048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.066 [2024-11-17 00:49:09.917317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.012  [2024-11-17T00:49:12.019Z] Copying: 17/256 [MB] (17 MBps) [2024-11-17T00:49:12.962Z] Copying: 32/256 [MB] (14 MBps) [2024-11-17T00:49:14.353Z] Copying: 47/256 [MB] (14 MBps) [2024-11-17T00:49:14.928Z] Copying: 58/256 [MB] (11 MBps) [2024-11-17T00:49:16.318Z] Copying: 70/256 [MB] (12 MBps) [2024-11-17T00:49:17.262Z] Copying: 81/256 [MB] (10 MBps) [2024-11-17T00:49:18.207Z] Copying: 91/256 [MB] (10 MBps) [2024-11-17T00:49:19.152Z] Copying: 101/256 [MB] (10 MBps) [2024-11-17T00:49:20.097Z] Copying: 112/256 [MB] (10 MBps) [2024-11-17T00:49:21.043Z] Copying: 122/256 [MB] (10 MBps) [2024-11-17T00:49:21.989Z] Copying: 133/256 [MB] (10 MBps) [2024-11-17T00:49:22.935Z] Copying: 143/256 [MB] (10 MBps) [2024-11-17T00:49:24.325Z] Copying: 154/256 [MB] (10 MBps) [2024-11-17T00:49:25.268Z] Copying: 164/256 [MB] (10 MBps) [2024-11-17T00:49:26.215Z] Copying: 182/256 [MB] (17 MBps) [2024-11-17T00:49:27.161Z] Copying: 196/256 [MB] (13 MBps) [2024-11-17T00:49:28.106Z] Copying: 210/256 [MB] (14 MBps) [2024-11-17T00:49:29.143Z] Copying: 224/256 [MB] (13 MBps) [2024-11-17T00:49:29.722Z] Copying: 240/256 [MB] (15 MBps) [2024-11-17T00:49:29.722Z] Copying: 256/256 [MB] (average 12 MBps)[2024-11-17 00:49:29.612853] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:37.659 [2024-11-17 00:49:29.614594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.614638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:37.659 [2024-11-17 00:49:29.614654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:37.659 [2024-11-17 00:49:29.614663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.614684] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:37.659 [2024-11-17 00:49:29.615238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.615257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:37.659 [2024-11-17 00:49:29.615267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:17:37.659 [2024-11-17 00:49:29.615275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.615549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.615680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:37.659 [2024-11-17 00:49:29.615695] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:37.659 [2024-11-17 00:49:29.615703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.619426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.619448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:37.659 [2024-11-17 00:49:29.619459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.698 ms 00:17:37.659 [2024-11-17 00:49:29.619466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.626812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.626959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:37.659 [2024-11-17 00:49:29.626976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.314 ms 00:17:37.659 [2024-11-17 00:49:29.626984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.629714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.629755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:37.659 [2024-11-17 00:49:29.629765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:17:37.659 [2024-11-17 00:49:29.629783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.634746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.634792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:37.659 [2024-11-17 00:49:29.634809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.911 ms 00:17:37.659 [2024-11-17 00:49:29.634817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.634942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.634952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:37.659 [2024-11-17 00:49:29.634961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:37.659 [2024-11-17 00:49:29.634969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.638243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.638288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:37.659 [2024-11-17 00:49:29.638297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:17:37.659 [2024-11-17 00:49:29.638303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.640777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.640934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:37.659 [2024-11-17 00:49:29.640950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.433 ms 00:17:37.659 [2024-11-17 00:49:29.640957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-11-17 00:49:29.643053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-11-17 00:49:29.643094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:37.660 
[2024-11-17 00:49:29.643104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:17:37.660 [2024-11-17 00:49:29.643110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.660 [2024-11-17 00:49:29.645347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.660 [2024-11-17 00:49:29.645399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:37.660 [2024-11-17 00:49:29.645409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:17:37.660 [2024-11-17 00:49:29.645416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.660 [2024-11-17 00:49:29.645454] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:37.660 [2024-11-17 00:49:29.645476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645620] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 
00:49:29.645809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:37.660 [2024-11-17 00:49:29.645962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.645970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.645977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.645984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.645991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:17:37.661 [2024-11-17 00:49:29.645999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:37.661 [2024-11-17 00:49:29.646236] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:37.661 [2024-11-17 00:49:29.646250] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa34e40b-328a-44ca-be31-18a041db8592 00:17:37.661 [2024-11-17 00:49:29.646266] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:37.661 [2024-11-17 00:49:29.646273] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:37.661 [2024-11-17 00:49:29.646280] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:37.661 [2024-11-17 00:49:29.646288] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:37.661 [2024-11-17 00:49:29.646296] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:37.661 [2024-11-17 00:49:29.646303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:37.661 [2024-11-17 00:49:29.646312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:37.661 [2024-11-17 00:49:29.646318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:37.661 [2024-11-17 00:49:29.646324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:37.661 [2024-11-17 00:49:29.646331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.661 [2024-11-17 00:49:29.646338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:37.661 [2024-11-17 00:49:29.646349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:17:37.661 [2024-11-17 00:49:29.646375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.661 [2024-11-17 00:49:29.648424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.661 [2024-11-17 00:49:29.648450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:37.661 [2024-11-17 00:49:29.648460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:17:37.661 [2024-11-17 00:49:29.648468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.661 [2024-11-17 00:49:29.648630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.661 [2024-11-17 00:49:29.648644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:37.661 [2024-11-17 00:49:29.648653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:17:37.661 [2024-11-17 00:49:29.648661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.661 [2024-11-17 00:49:29.655291] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.661 [2024-11-17 00:49:29.655337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:37.661 [2024-11-17 00:49:29.655347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.661 [2024-11-17 00:49:29.655375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.661 [2024-11-17 00:49:29.655463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.661 [2024-11-17 00:49:29.655495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:37.661 [2024-11-17 00:49:29.655504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.661 [2024-11-17 00:49:29.655511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.661 [2024-11-17 00:49:29.655555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.661 [2024-11-17 00:49:29.655564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.661 [2024-11-17 00:49:29.655572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.661 [2024-11-17 00:49:29.655580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.661 [2024-11-17 00:49:29.655597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.661 [2024-11-17 00:49:29.655606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.661 [2024-11-17 00:49:29.655616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.661 [2024-11-17 00:49:29.655628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.661 [2024-11-17 00:49:29.669056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.661 [2024-11-17 00:49:29.669121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.661 [2024-11-17 00:49:29.669133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.661 [2024-11-17 00:49:29.669141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.661 [2024-11-17 00:49:29.680423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.661 [2024-11-17 00:49:29.680638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.661 [2024-11-17 00:49:29.680656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.661 [2024-11-17 00:49:29.680666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.661 [2024-11-17 00:49:29.680721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.662 [2024-11-17 00:49:29.680739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.662 [2024-11-17 00:49:29.680748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.662 [2024-11-17 00:49:29.680757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.662 [2024-11-17 00:49:29.680790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.662 [2024-11-17 00:49:29.680799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.662 [2024-11-17 00:49:29.680808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.662 [2024-11-17 00:49:29.680824] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:37.662 [2024-11-17 00:49:29.680908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.662 [2024-11-17 00:49:29.680923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.662 [2024-11-17 00:49:29.680931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.662 [2024-11-17 00:49:29.680940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.662 [2024-11-17 00:49:29.680975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.662 [2024-11-17 00:49:29.680985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:37.662 [2024-11-17 00:49:29.680993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.662 [2024-11-17 00:49:29.681001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.662 [2024-11-17 00:49:29.681050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.662 [2024-11-17 00:49:29.681060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:37.662 [2024-11-17 00:49:29.681069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.662 [2024-11-17 00:49:29.681081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.662 [2024-11-17 00:49:29.681136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.662 [2024-11-17 00:49:29.681148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.662 [2024-11-17 00:49:29.681157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.662 [2024-11-17 00:49:29.681168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.662 [2024-11-17 00:49:29.681316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.692 ms, result 0 00:17:37.923 00:17:37.923 00:17:37.923 00:49:29 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:37.923 00:49:29 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:38.494 00:49:30 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:38.756 [2024-11-17 00:49:30.561908] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
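The shutdown trace above follows a fixed record pattern: mngt/ftl_mngt.c:trace_step prints an Action (or Rollback) marker, the step name, a duration in ms, and a status, and finish_msg then reports the wall-clock total for the whole management process ("name 'FTL shutdown', duration = 66.692 ms, result 0"). A minimal Python sketch of that accounting, assuming the output above has been captured to a plain text file (the ftl_trim.log path is hypothetical):

import re

# Per-step duration records printed by mngt/ftl_mngt.c:trace_step, e.g.
#   "430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms"
DURATION_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def sum_step_durations(text):
    """Sum every per-step duration reported by trace_step in a captured log."""
    return sum(float(ms) for ms in DURATION_RE.findall(text))

with open("ftl_trim.log") as f:  # hypothetical capture of the log above
    total = sum_step_durations(f.read())

# finish_msg reports wall-clock time for the whole management process; the
# per-step sum excludes time spent between steps, so it should come in
# somewhat below totals like the 66.692 ms reported for 'FTL shutdown'.
print("sum of per-step durations: %.3f ms" % total)

The stats dump in the same shutdown also explains its "WAF: inf" line: with total writes: 960 and user writes: 0, a write-amplification factor computed as total writes over user writes divides by zero, so it prints as inf.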
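Between the two runs, trim.sh performs its verification step: cmp --bytes=4194304 compares the first 4 MiB of the dumped test/ftl/data file against /dev/zero (presumably confirming that the trimmed range reads back as zeroes), and md5sum fingerprints the file before spdk_dd writes a fresh random pattern through ftl0. A rough Python equivalent of those two checks, offered as a sketch only:

import hashlib

def is_zero_prefix(path, nbytes=4194304):
    """Rough stand-in for `cmp --bytes=4194304 <path> /dev/zero`."""
    with open(path, "rb") as f:
        prefix = f.read(nbytes)
    return len(prefix) == nbytes and prefix.count(0) == len(prefix)

def md5sum(path):
    """Rough stand-in for `md5sum <path>`."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            digest.update(block)
    return digest.hexdigest()

data = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path taken from the log
print("first 4 MiB zeroed:", is_zero_prefix(data))
print("md5:", md5sum(data))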
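The startup now beginning prints the FTL layout again, and its figures are internally consistent: 23592960 L2P entries at the reported address size of 4 bytes is exactly the 90.00 MiB shown for the l2p region. The band dump from the shutdown above can be cross-checked the same way, under the assumption (not stated in the log) of a 4 KiB FTL block:

MIB = 1024 * 1024

# Figures copied from the layout dump in this log.
l2p_entries = 23592960     # "L2P entries: 23592960"
l2p_addr_size = 4          # "L2P address size: 4" (bytes per entry)
assert l2p_entries * l2p_addr_size == 90 * MIB  # "Region l2p ... blocks: 90.00 MiB"

# Band bookkeeping from the shutdown dump: 100 bands of 261120 blocks each.
bands, blocks_per_band = 100, 261120
block_size = 4096          # assumption: 4 KiB FTL block
print(bands * blocks_per_band * block_size // MIB, "MiB across all bands")
# -> 102000 MiB, just under the 102400.00 MiB data_btm region; the remainder
#    is presumably per-band metadata.

Both startups shown here also restore the same NV cache state ("full chunks = 1, empty chunks = 3"), which is consistent with the persist steps in the intervening shutdown.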
00:17:38.756 [2024-11-17 00:49:30.562066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86148 ] 00:17:38.756 [2024-11-17 00:49:30.713784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.756 [2024-11-17 00:49:30.763823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.019 [2024-11-17 00:49:30.880777] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.019 [2024-11-17 00:49:30.880862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.019 [2024-11-17 00:49:31.043016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.043457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:39.019 [2024-11-17 00:49:31.043511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:39.019 [2024-11-17 00:49:31.043527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.046127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.046177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.019 [2024-11-17 00:49:31.046191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.560 ms 00:17:39.019 [2024-11-17 00:49:31.046202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.046338] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:39.019 [2024-11-17 00:49:31.046619] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:39.019 [2024-11-17 00:49:31.046641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.046650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.019 [2024-11-17 00:49:31.046680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:17:39.019 [2024-11-17 00:49:31.046692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.048480] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:39.019 [2024-11-17 00:49:31.052116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.052171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:39.019 [2024-11-17 00:49:31.052182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.638 ms 00:17:39.019 [2024-11-17 00:49:31.052194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.052291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.052303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:39.019 [2024-11-17 00:49:31.052319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:39.019 [2024-11-17 00:49:31.052327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.060372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:39.019 [2024-11-17 00:49:31.060415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.019 [2024-11-17 00:49:31.060425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.978 ms 00:17:39.019 [2024-11-17 00:49:31.060433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.060583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.060595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.019 [2024-11-17 00:49:31.060605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:39.019 [2024-11-17 00:49:31.060613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.060641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.060653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:39.019 [2024-11-17 00:49:31.060662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:39.019 [2024-11-17 00:49:31.060669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.060692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:39.019 [2024-11-17 00:49:31.062714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.062758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.019 [2024-11-17 00:49:31.062772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms 00:17:39.019 [2024-11-17 00:49:31.062780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.062822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.062838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:39.019 [2024-11-17 00:49:31.062849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:39.019 [2024-11-17 00:49:31.062857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.062876] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:39.019 [2024-11-17 00:49:31.062896] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:39.019 [2024-11-17 00:49:31.062934] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:39.019 [2024-11-17 00:49:31.062955] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:39.019 [2024-11-17 00:49:31.063063] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:39.019 [2024-11-17 00:49:31.063075] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:39.019 [2024-11-17 00:49:31.063086] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:39.019 [2024-11-17 00:49:31.063097] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:39.019 [2024-11-17 00:49:31.063106] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:39.019 [2024-11-17 00:49:31.063114] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:39.019 [2024-11-17 00:49:31.063127] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:39.019 [2024-11-17 00:49:31.063135] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:39.019 [2024-11-17 00:49:31.063143] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:39.019 [2024-11-17 00:49:31.063151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.063162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:39.019 [2024-11-17 00:49:31.063173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:17:39.019 [2024-11-17 00:49:31.063181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.063269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.019 [2024-11-17 00:49:31.063277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:39.019 [2024-11-17 00:49:31.063285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:39.019 [2024-11-17 00:49:31.063294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.019 [2024-11-17 00:49:31.063419] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:39.019 [2024-11-17 00:49:31.063438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:39.019 [2024-11-17 00:49:31.063448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:39.019 [2024-11-17 00:49:31.063462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.019 [2024-11-17 00:49:31.063477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:39.019 [2024-11-17 00:49:31.063486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:39.019 [2024-11-17 00:49:31.063494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:39.019 [2024-11-17 00:49:31.063503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:39.019 [2024-11-17 00:49:31.063515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:39.019 [2024-11-17 00:49:31.063523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:39.019 [2024-11-17 00:49:31.063531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:39.019 [2024-11-17 00:49:31.063539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:39.019 [2024-11-17 00:49:31.063549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:39.019 [2024-11-17 00:49:31.063558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:39.019 [2024-11-17 00:49:31.063568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:39.019 [2024-11-17 00:49:31.063576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.019 [2024-11-17 00:49:31.063584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:39.020 [2024-11-17 00:49:31.063593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:39.020 [2024-11-17 00:49:31.063601] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.020 [2024-11-17 00:49:31.063609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:39.020 [2024-11-17 00:49:31.063617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:39.020 [2024-11-17 00:49:31.063625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.020 [2024-11-17 00:49:31.063634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:39.020 [2024-11-17 00:49:31.063642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:39.020 [2024-11-17 00:49:31.063655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.020 [2024-11-17 00:49:31.063663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:39.020 [2024-11-17 00:49:31.063671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:39.020 [2024-11-17 00:49:31.063679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.020 [2024-11-17 00:49:31.063687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:39.020 [2024-11-17 00:49:31.063695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:39.020 [2024-11-17 00:49:31.063703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.020 [2024-11-17 00:49:31.063711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:39.020 [2024-11-17 00:49:31.063719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:39.020 [2024-11-17 00:49:31.063727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:39.020 [2024-11-17 00:49:31.063735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:39.020 [2024-11-17 00:49:31.063742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:39.020 [2024-11-17 00:49:31.063749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:39.020 [2024-11-17 00:49:31.063764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:39.020 [2024-11-17 00:49:31.063770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:39.020 [2024-11-17 00:49:31.063777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.020 [2024-11-17 00:49:31.063786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:39.020 [2024-11-17 00:49:31.063793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:39.020 [2024-11-17 00:49:31.063799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.020 [2024-11-17 00:49:31.063806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:39.020 [2024-11-17 00:49:31.063814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:39.020 [2024-11-17 00:49:31.063826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:39.020 [2024-11-17 00:49:31.063836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.020 [2024-11-17 00:49:31.063844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:39.020 [2024-11-17 00:49:31.063851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:39.020 [2024-11-17 00:49:31.063858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:39.020 
[2024-11-17 00:49:31.063866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:39.020 [2024-11-17 00:49:31.063873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:39.020 [2024-11-17 00:49:31.063881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:39.020 [2024-11-17 00:49:31.063890] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:39.020 [2024-11-17 00:49:31.063900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:39.020 [2024-11-17 00:49:31.063909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:39.020 [2024-11-17 00:49:31.063918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:39.020 [2024-11-17 00:49:31.063927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:39.020 [2024-11-17 00:49:31.063934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:39.020 [2024-11-17 00:49:31.063942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:39.020 [2024-11-17 00:49:31.063949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:39.020 [2024-11-17 00:49:31.063957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:39.020 [2024-11-17 00:49:31.063965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:39.020 [2024-11-17 00:49:31.063972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:39.020 [2024-11-17 00:49:31.063980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:39.020 [2024-11-17 00:49:31.063988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:39.020 [2024-11-17 00:49:31.063996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:39.020 [2024-11-17 00:49:31.064003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:39.020 [2024-11-17 00:49:31.064011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:39.020 [2024-11-17 00:49:31.064018] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:39.020 [2024-11-17 00:49:31.064026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:39.020 [2024-11-17 00:49:31.064034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:39.020 [2024-11-17 00:49:31.064045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:39.020 [2024-11-17 00:49:31.064053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:39.020 [2024-11-17 00:49:31.064061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:39.020 [2024-11-17 00:49:31.064068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.020 [2024-11-17 00:49:31.064077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:39.020 [2024-11-17 00:49:31.064087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:17:39.020 [2024-11-17 00:49:31.064098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.086092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.086149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.283 [2024-11-17 00:49:31.086163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.927 ms 00:17:39.283 [2024-11-17 00:49:31.086172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.086323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.086336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:39.283 [2024-11-17 00:49:31.086345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:39.283 [2024-11-17 00:49:31.086382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.098198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.098247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.283 [2024-11-17 00:49:31.098259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.783 ms 00:17:39.283 [2024-11-17 00:49:31.098268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.098345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.098380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.283 [2024-11-17 00:49:31.098399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:39.283 [2024-11-17 00:49:31.098408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.098923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.098965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.283 [2024-11-17 00:49:31.098979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:17:39.283 [2024-11-17 00:49:31.098988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.099165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.099177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.283 [2024-11-17 00:49:31.099188] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:17:39.283 [2024-11-17 00:49:31.099201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.106539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.106590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.283 [2024-11-17 00:49:31.106601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.306 ms 00:17:39.283 [2024-11-17 00:49:31.106609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.110335] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:39.283 [2024-11-17 00:49:31.110402] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:39.283 [2024-11-17 00:49:31.110415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.110425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:39.283 [2024-11-17 00:49:31.110433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:17:39.283 [2024-11-17 00:49:31.110441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.126042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.126101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:39.283 [2024-11-17 00:49:31.126114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.541 ms 00:17:39.283 [2024-11-17 00:49:31.126122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.128876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.129049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:39.283 [2024-11-17 00:49:31.129066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:17:39.283 [2024-11-17 00:49:31.129074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.131715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.131762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:39.283 [2024-11-17 00:49:31.131781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:17:39.283 [2024-11-17 00:49:31.131789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.132134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.132152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:39.283 [2024-11-17 00:49:31.132162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:17:39.283 [2024-11-17 00:49:31.132173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.155553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.155757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:39.283 [2024-11-17 00:49:31.155820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.356 ms 00:17:39.283 [2024-11-17 00:49:31.155846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.163941] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:39.283 [2024-11-17 00:49:31.183071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.183236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:39.283 [2024-11-17 00:49:31.183294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.060 ms 00:17:39.283 [2024-11-17 00:49:31.183317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.183447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.183478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:39.283 [2024-11-17 00:49:31.183500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:39.283 [2024-11-17 00:49:31.183510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.183576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.183587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:39.283 [2024-11-17 00:49:31.183596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:39.283 [2024-11-17 00:49:31.183604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.183628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.183643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:39.283 [2024-11-17 00:49:31.183651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:39.283 [2024-11-17 00:49:31.183660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.183697] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:39.283 [2024-11-17 00:49:31.183711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.183720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:39.283 [2024-11-17 00:49:31.183730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:39.283 [2024-11-17 00:49:31.183737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.189410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.189591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:39.283 [2024-11-17 00:49:31.189609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.651 ms 00:17:39.283 [2024-11-17 00:49:31.189618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 [2024-11-17 00:49:31.189708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.283 [2024-11-17 00:49:31.189723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:39.283 [2024-11-17 00:49:31.189732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:39.283 [2024-11-17 00:49:31.189740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.283 
[2024-11-17 00:49:31.190782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:39.283 [2024-11-17 00:49:31.192093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 147.461 ms, result 0 00:17:39.283 [2024-11-17 00:49:31.193579] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:39.283 [2024-11-17 00:49:31.200782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:39.546  [2024-11-17T00:49:31.609Z] Copying: 4096/4096 [kB] (average 19 MBps)[2024-11-17 00:49:31.410194] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:39.546 [2024-11-17 00:49:31.411452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.411615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:39.546 [2024-11-17 00:49:31.411642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:39.546 [2024-11-17 00:49:31.411651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.546 [2024-11-17 00:49:31.411678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:39.546 [2024-11-17 00:49:31.412320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.412341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:39.546 [2024-11-17 00:49:31.412351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:17:39.546 [2024-11-17 00:49:31.412381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.546 [2024-11-17 00:49:31.414501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.414544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:39.546 [2024-11-17 00:49:31.414554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.095 ms 00:17:39.546 [2024-11-17 00:49:31.414563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.546 [2024-11-17 00:49:31.418959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.418995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:39.546 [2024-11-17 00:49:31.419006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.374 ms 00:17:39.546 [2024-11-17 00:49:31.419014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.546 [2024-11-17 00:49:31.426017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.426168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:39.546 [2024-11-17 00:49:31.426188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.972 ms 00:17:39.546 [2024-11-17 00:49:31.426196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.546 [2024-11-17 00:49:31.428873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.428920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:39.546 [2024-11-17 00:49:31.428931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.605 ms 00:17:39.546 [2024-11-17 00:49:31.428950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.546 [2024-11-17 00:49:31.433712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.433873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:39.546 [2024-11-17 00:49:31.433939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:17:39.546 [2024-11-17 00:49:31.433963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:39.546 [2024-11-17 00:49:31.434368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.434556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:39.546 [2024-11-17 00:49:31.434583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:39.546 [2024-11-17 00:49:31.434602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:39.546 [2024-11-17 00:49:31.437936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.438079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:39.546 [2024-11-17 00:49:31.438095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.302 ms 00:17:39.546 [2024-11-17 00:49:31.438102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:39.546 [2024-11-17 00:49:31.440892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.441037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:39.546 [2024-11-17 00:49:31.441054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:17:39.546 [2024-11-17 00:49:31.441062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:39.546 [2024-11-17 00:49:31.443235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.443274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:39.546 [2024-11-17 00:49:31.443283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:17:39.546 [2024-11-17 00:49:31.443290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:39.546 [2024-11-17 00:49:31.445689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.546 [2024-11-17 00:49:31.445732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:39.546 [2024-11-17 00:49:31.445742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.308 ms 00:17:39.546 [2024-11-17 00:49:31.445749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:39.546 [2024-11-17 00:49:31.445790] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:17:39.546 [2024-11-17 00:49:31.445812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 
[ftl_dev_dump_bands entries for Band 2 through Band 99 elided: 98 identical entries, each "0 / 261120 wr_cnt: 0 state: free"] 
00:17:39.547 [2024-11-17 00:49:31.446581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
00:17:39.548 [2024-11-17 00:49:31.446597] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:39.548 [2024-11-17 00:49:31.446605] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa34e40b-328a-44ca-be31-18a041db8592 00:17:39.548 [2024-11-17 00:49:31.446620] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:39.548 [2024-11-17 00:49:31.446628] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:39.548 
[2024-11-17 00:49:31.446634] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:39.548 [2024-11-17 00:49:31.446642] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:39.548 [2024-11-17 00:49:31.446650] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:39.548 [2024-11-17 00:49:31.446658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:39.548 [2024-11-17 00:49:31.446665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:39.548 [2024-11-17 00:49:31.446672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:39.548 [2024-11-17 00:49:31.446678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:39.548 [2024-11-17 00:49:31.446685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.548 [2024-11-17 00:49:31.446694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:39.548 [2024-11-17 00:49:31.446708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:17:39.548 [2024-11-17 00:49:31.446715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.448902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.548 [2024-11-17 00:49:31.448932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:39.548 [2024-11-17 00:49:31.448942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:17:39.548 [2024-11-17 00:49:31.448961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.449083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.548 [2024-11-17 00:49:31.449092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:39.548 [2024-11-17 00:49:31.449101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:39.548 [2024-11-17 00:49:31.449111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.456155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.456203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.548 [2024-11-17 00:49:31.456213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.456221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.456276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.456291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.548 [2024-11-17 00:49:31.456299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.456306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.456389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.456401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.548 [2024-11-17 00:49:31.456409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.456416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.456433] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.456442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.548 [2024-11-17 00:49:31.456456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.456465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.469886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.469942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.548 [2024-11-17 00:49:31.469953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.469971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.481043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.481104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.548 [2024-11-17 00:49:31.481115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.481123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.481171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.481182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.548 [2024-11-17 00:49:31.481190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.481199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.481230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.481239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.548 [2024-11-17 00:49:31.481248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.481260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.481332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.481344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.548 [2024-11-17 00:49:31.481376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.481385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.481425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.481435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.548 [2024-11-17 00:49:31.481443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.481452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.481496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.481506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.548 [2024-11-17 00:49:31.481515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.481523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:39.548 [2024-11-17 00:49:31.481574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.548 [2024-11-17 00:49:31.481584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.548 [2024-11-17 00:49:31.481593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.548 [2024-11-17 00:49:31.481604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.548 [2024-11-17 00:49:31.481766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.278 ms, result 0 00:17:39.811 00:17:39.811 00:17:39.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.811 00:49:31 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86162 00:17:39.811 00:49:31 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86162 00:17:39.811 00:49:31 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:39.811 00:49:31 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86162 ']' 00:17:39.811 00:49:31 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.811 00:49:31 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.811 00:49:31 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.811 00:49:31 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.811 00:49:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:39.811 [2024-11-17 00:49:31.814242] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:39.811 [2024-11-17 00:49:31.814414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86162 ] 00:17:40.072 [2024-11-17 00:49:31.965412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.072 [2024-11-17 00:49:32.015163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.642 00:49:32 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.642 00:49:32 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:40.642 00:49:32 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:40.903 [2024-11-17 00:49:32.818435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.903 [2024-11-17 00:49:32.818515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.166 [2024-11-17 00:49:32.994141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.166 [2024-11-17 00:49:32.994207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.166 [2024-11-17 00:49:32.994223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:41.166 [2024-11-17 00:49:32.994237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.166 [2024-11-17 00:49:32.996769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.166 [2024-11-17 00:49:32.997050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.166 [2024-11-17 00:49:32.997071] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.512 ms 00:17:41.166 [2024-11-17 00:49:32.997082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.166 [2024-11-17 00:49:32.997180] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.166 [2024-11-17 00:49:32.997473] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.166 [2024-11-17 00:49:32.997491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.166 [2024-11-17 00:49:32.997510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.166 [2024-11-17 00:49:32.997521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:17:41.166 [2024-11-17 00:49:32.997532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.166 [2024-11-17 00:49:32.999270] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:41.166 [2024-11-17 00:49:33.002929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.166 [2024-11-17 00:49:33.002981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:41.166 [2024-11-17 00:49:33.002994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:17:41.166 [2024-11-17 00:49:33.003007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.166 [2024-11-17 00:49:33.003102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.166 [2024-11-17 00:49:33.003114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:41.166 [2024-11-17 00:49:33.003128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:41.166 [2024-11-17 00:49:33.003136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.166 [2024-11-17 00:49:33.011005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.166 [2024-11-17 00:49:33.011186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.166 [2024-11-17 00:49:33.011208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.817 ms 00:17:41.166 [2024-11-17 00:49:33.011217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.166 [2024-11-17 00:49:33.011338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.166 [2024-11-17 00:49:33.011350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.166 [2024-11-17 00:49:33.011385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:41.166 [2024-11-17 00:49:33.011393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.166 [2024-11-17 00:49:33.011425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.166 [2024-11-17 00:49:33.011434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.166 [2024-11-17 00:49:33.011448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:41.166 [2024-11-17 00:49:33.011458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.166 [2024-11-17 00:49:33.011484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:41.166 [2024-11-17 00:49:33.013465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:41.167 [2024-11-17 00:49:33.013508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.167 [2024-11-17 00:49:33.013518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:17:41.167 [2024-11-17 00:49:33.013528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 00:49:33.013571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 00:49:33.013587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.167 [2024-11-17 00:49:33.013595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:41.167 [2024-11-17 00:49:33.013606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 00:49:33.013627] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:41.167 [2024-11-17 00:49:33.013654] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:41.167 [2024-11-17 00:49:33.013695] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:41.167 [2024-11-17 00:49:33.013716] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:41.167 [2024-11-17 00:49:33.013820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.167 [2024-11-17 00:49:33.013834] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.167 [2024-11-17 00:49:33.013845] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:41.167 [2024-11-17 00:49:33.013860] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.167 [2024-11-17 00:49:33.013869] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.167 [2024-11-17 00:49:33.013883] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:41.167 [2024-11-17 00:49:33.013890] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.167 [2024-11-17 00:49:33.013899] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.167 [2024-11-17 00:49:33.013907] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:41.167 [2024-11-17 00:49:33.013917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 00:49:33.013926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.167 [2024-11-17 00:49:33.013937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:41.167 [2024-11-17 00:49:33.013945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 00:49:33.014035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.167 [2024-11-17 00:49:33.014044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.167 [2024-11-17 00:49:33.014053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:41.167 [2024-11-17 00:49:33.014060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.167 [2024-11-17 00:49:33.014163] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.167 [2024-11-17 00:49:33.014175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.167 [2024-11-17 00:49:33.014189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.167 [2024-11-17 00:49:33.014201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.167 [2024-11-17 00:49:33.014222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:41.167 [2024-11-17 00:49:33.014243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.167 [2024-11-17 00:49:33.014260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.167 [2024-11-17 00:49:33.014278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.167 [2024-11-17 00:49:33.014285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:41.167 [2024-11-17 00:49:33.014295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.167 [2024-11-17 00:49:33.014303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.167 [2024-11-17 00:49:33.014313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:41.167 [2024-11-17 00:49:33.014320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.167 [2024-11-17 00:49:33.014338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:41.167 [2024-11-17 00:49:33.014347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.167 [2024-11-17 00:49:33.014614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.167 [2024-11-17 00:49:33.014655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.167 [2024-11-17 00:49:33.014675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.167 [2024-11-17 00:49:33.014715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.167 [2024-11-17 00:49:33.014736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.167 [2024-11-17 00:49:33.014775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.167 [2024-11-17 00:49:33.014793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.167 [2024-11-17 00:49:33.014887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.167 [2024-11-17 
00:49:33.014914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:41.167 [2024-11-17 00:49:33.014934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.167 [2024-11-17 00:49:33.014956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.167 [2024-11-17 00:49:33.014977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:41.167 [2024-11-17 00:49:33.015000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.167 [2024-11-17 00:49:33.015019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.167 [2024-11-17 00:49:33.015039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:41.167 [2024-11-17 00:49:33.015058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.167 [2024-11-17 00:49:33.015078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.167 [2024-11-17 00:49:33.015097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:41.167 [2024-11-17 00:49:33.015116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.167 [2024-11-17 00:49:33.015135] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.167 [2024-11-17 00:49:33.015201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.167 [2024-11-17 00:49:33.015224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.167 [2024-11-17 00:49:33.015246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.167 [2024-11-17 00:49:33.015267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.167 [2024-11-17 00:49:33.015287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.167 [2024-11-17 00:49:33.015347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.167 [2024-11-17 00:49:33.015389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.167 [2024-11-17 00:49:33.015409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.167 [2024-11-17 00:49:33.015432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.167 [2024-11-17 00:49:33.015453] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.167 [2024-11-17 00:49:33.015521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.167 [2024-11-17 00:49:33.015552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:41.167 [2024-11-17 00:49:33.015585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:41.167 [2024-11-17 00:49:33.015774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:41.167 [2024-11-17 00:49:33.015816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:41.167 [2024-11-17 00:49:33.015836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:41.167 
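The superblock metadata layout entries above (and continuing right after this sketch) give each region's offset and size in hexadecimal block units rather than the MiB figures used in the earlier NV cache / base device dumps; the two views describe the same regions. A quick consistency check of one entry, on the assumption of 4096-byte FTL blocks and that region type 0xa corresponds to the p2l0 region listed earlier (both inferred from the numbers in this capture, not stated in the log):

  # Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800, decoded to MiB:
  blk_offs=0x5b20 blk_sz=0x800
  awk -v o=$((blk_offs)) -v s=$((blk_sz)) \
      'BEGIN { printf "offset: %.2f MiB, size: %.2f MiB\n", o*4096/2^20, s*4096/2^20 }'
  # -> offset: 91.12 MiB, size: 8.00 MiB, matching the p2l0 entry above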
[2024-11-17 00:49:33.015847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:41.167 [2024-11-17 00:49:33.015855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:41.167 [2024-11-17 00:49:33.015865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:41.167 [2024-11-17 00:49:33.015872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:41.167 [2024-11-17 00:49:33.015882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:41.167 [2024-11-17 00:49:33.015889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:41.167 [2024-11-17 00:49:33.015899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:41.167 [2024-11-17 00:49:33.015908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:41.167 [2024-11-17 00:49:33.015920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:41.168 [2024-11-17 00:49:33.015927] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.168 [2024-11-17 00:49:33.015939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.168 [2024-11-17 00:49:33.015950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.168 [2024-11-17 00:49:33.015959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.168 [2024-11-17 00:49:33.015967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.168 [2024-11-17 00:49:33.015976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.168 [2024-11-17 00:49:33.015986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.015997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.168 [2024-11-17 00:49:33.016006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:17:41.168 [2024-11-17 00:49:33.016016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.029789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.029962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.168 [2024-11-17 00:49:33.029980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.676 ms 00:17:41.168 [2024-11-17 00:49:33.029990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.030118] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.030134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:41.168 [2024-11-17 00:49:33.030146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:41.168 [2024-11-17 00:49:33.030156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.042020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.042067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.168 [2024-11-17 00:49:33.042077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.843 ms 00:17:41.168 [2024-11-17 00:49:33.042088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.042156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.042176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.168 [2024-11-17 00:49:33.042185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.168 [2024-11-17 00:49:33.042195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.042719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.042744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.168 [2024-11-17 00:49:33.042756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:17:41.168 [2024-11-17 00:49:33.042768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.042921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.042937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.168 [2024-11-17 00:49:33.042949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:41.168 [2024-11-17 00:49:33.042964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.064856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.064931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.168 [2024-11-17 00:49:33.064948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.865 ms 00:17:41.168 [2024-11-17 00:49:33.064963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.071112] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:41.168 [2024-11-17 00:49:33.071257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:41.168 [2024-11-17 00:49:33.071314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.071346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:41.168 [2024-11-17 00:49:33.071451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.169 ms 00:17:41.168 [2024-11-17 00:49:33.071481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.092484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 
00:49:33.092557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:41.168 [2024-11-17 00:49:33.092594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.751 ms 00:17:41.168 [2024-11-17 00:49:33.092611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.096079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.096310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:41.168 [2024-11-17 00:49:33.096333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:17:41.168 [2024-11-17 00:49:33.096344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.099513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.099576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:41.168 [2024-11-17 00:49:33.099588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.026 ms 00:17:41.168 [2024-11-17 00:49:33.099599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.100000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.100021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:41.168 [2024-11-17 00:49:33.100032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:41.168 [2024-11-17 00:49:33.100043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.131965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.132029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.168 [2024-11-17 00:49:33.132049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.897 ms 00:17:41.168 [2024-11-17 00:49:33.132064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.140691] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:41.168 [2024-11-17 00:49:33.165763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.165818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.168 [2024-11-17 00:49:33.165835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.593 ms 00:17:41.168 [2024-11-17 00:49:33.165845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.165946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.165957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.168 [2024-11-17 00:49:33.165976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:41.168 [2024-11-17 00:49:33.165988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.166060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.166070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.168 [2024-11-17 00:49:33.166086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:41.168 [2024-11-17 
00:49:33.166095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.166125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.166135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:41.168 [2024-11-17 00:49:33.166151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:41.168 [2024-11-17 00:49:33.166162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.166210] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.168 [2024-11-17 00:49:33.166220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.166231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.168 [2024-11-17 00:49:33.166239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:41.168 [2024-11-17 00:49:33.166250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.173892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.174171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.168 [2024-11-17 00:49:33.174195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.619 ms 00:17:41.168 [2024-11-17 00:49:33.174208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.174474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.168 [2024-11-17 00:49:33.174510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.168 [2024-11-17 00:49:33.174522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:41.168 [2024-11-17 00:49:33.174535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.168 [2024-11-17 00:49:33.175897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.168 [2024-11-17 00:49:33.177542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 181.345 ms, result 0 00:17:41.168 [2024-11-17 00:49:33.179773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.168 Some configs were skipped because the RPC state that can call them passed over. 
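The two trim.sh steps that follow exercise bdev_ftl_unmap at opposite ends of ftl0's address space: given the 23592960 L2P entries reported during startup, the tail range starts at 23592960 - 1024 = 23591936. Replaying the same calls by hand against the running target would look like this (repo path as used in this run; nothing here beyond what the xtrace lines below show):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024   # head of the device
  $RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # tail: 23592960 - 1024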
00:17:41.168 00:49:33 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:41.431 [2024-11-17 00:49:33.417412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.431 [2024-11-17 00:49:33.417611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:41.431 [2024-11-17 00:49:33.417686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:17:41.431 [2024-11-17 00:49:33.417712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.431 [2024-11-17 00:49:33.417773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.576 ms, result 0 00:17:41.431 true 00:17:41.431 00:49:33 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:41.693 [2024-11-17 00:49:33.629042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.693 [2024-11-17 00:49:33.629239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:41.693 [2024-11-17 00:49:33.629303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:17:41.693 [2024-11-17 00:49:33.629329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.693 [2024-11-17 00:49:33.629406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.947 ms, result 0 00:17:41.693 true 00:17:41.693 00:49:33 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86162 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86162 ']' 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86162 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86162 00:17:41.693 killing process with pid 86162 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86162' 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86162 00:17:41.693 00:49:33 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86162 00:17:41.956 [2024-11-17 00:49:33.811802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.811858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:41.956 [2024-11-17 00:49:33.811875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:41.956 [2024-11-17 00:49:33.811883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.811911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:41.956 [2024-11-17 00:49:33.812595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.812628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:41.956 [2024-11-17 00:49:33.812639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.668 ms 00:17:41.956 [2024-11-17 00:49:33.812649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.812951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.812971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:41.956 [2024-11-17 00:49:33.812981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:41.956 [2024-11-17 00:49:33.812991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.817540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.817579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:41.956 [2024-11-17 00:49:33.817590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.530 ms 00:17:41.956 [2024-11-17 00:49:33.817600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.824582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.824634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:41.956 [2024-11-17 00:49:33.824645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.941 ms 00:17:41.956 [2024-11-17 00:49:33.824657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.827281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.827483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:41.956 [2024-11-17 00:49:33.827500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:17:41.956 [2024-11-17 00:49:33.827510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.831828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.831869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:41.956 [2024-11-17 00:49:33.831879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.279 ms 00:17:41.956 [2024-11-17 00:49:33.831890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.832019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.832037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:41.956 [2024-11-17 00:49:33.832046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:41.956 [2024-11-17 00:49:33.832056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.834931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.835060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:41.956 [2024-11-17 00:49:33.835074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.856 ms 00:17:41.956 [2024-11-17 00:49:33.835086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.837528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.837564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:41.956 [2024-11-17 
00:49:33.837573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:17:41.956 [2024-11-17 00:49:33.837582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.839566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.839604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:41.956 [2024-11-17 00:49:33.839614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.946 ms 00:17:41.956 [2024-11-17 00:49:33.839624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.841650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.956 [2024-11-17 00:49:33.841769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:41.956 [2024-11-17 00:49:33.841783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.963 ms 00:17:41.956 [2024-11-17 00:49:33.841792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.956 [2024-11-17 00:49:33.841824] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:41.956 [2024-11-17 00:49:33.841841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:41.956 [2024-11-17 00:49:33.841851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:41.956 [2024-11-17 00:49:33.841864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:41.956 [2024-11-17 00:49:33.841873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841988] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.841996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 
00:49:33.842210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:41.957 [2024-11-17 00:49:33.842439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:41.957 [2024-11-17 00:49:33.842669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:41.958 [2024-11-17 00:49:33.842677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:41.958 [2024-11-17 00:49:33.842686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:41.958 [2024-11-17 00:49:33.842695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:41.958 [2024-11-17 00:49:33.842705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:41.958 [2024-11-17 00:49:33.842713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:41.958 [2024-11-17 00:49:33.842724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:41.958 [2024-11-17 00:49:33.842731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:41.958 [2024-11-17 00:49:33.842748] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:41.958 [2024-11-17 00:49:33.842757] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa34e40b-328a-44ca-be31-18a041db8592 00:17:41.958 [2024-11-17 00:49:33.842766] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:41.958 [2024-11-17 00:49:33.842774] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:41.958 [2024-11-17 00:49:33.842787] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:41.958 [2024-11-17 00:49:33.842798] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:41.958 [2024-11-17 00:49:33.842808] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:41.958 [2024-11-17 00:49:33.842815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:41.958 [2024-11-17 00:49:33.842829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:41.958 [2024-11-17 00:49:33.842835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:41.958 [2024-11-17 00:49:33.842844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:41.958 [2024-11-17 00:49:33.842851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.958 [2024-11-17 00:49:33.842864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:41.958 [2024-11-17 00:49:33.842872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:17:41.958 [2024-11-17 00:49:33.842886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.844592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.958 [2024-11-17 00:49:33.844618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:41.958 [2024-11-17 00:49:33.844631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.689 ms 00:17:41.958 [2024-11-17 00:49:33.844641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.844741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:41.958 [2024-11-17 00:49:33.844752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:41.958 [2024-11-17 00:49:33.844761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:41.958 [2024-11-17 00:49:33.844770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.851989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.852027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.958 [2024-11-17 00:49:33.852036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.852051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.852121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.852132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.958 [2024-11-17 00:49:33.852140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.852152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.852193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.852207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.958 [2024-11-17 00:49:33.852215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.852224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.852244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.852258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.958 [2024-11-17 00:49:33.852266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.852275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.864965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.865010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.958 [2024-11-17 00:49:33.865021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.865031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.874844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.874895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.958 [2024-11-17 00:49:33.874906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.874918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.874957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.874974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.958 [2024-11-17 00:49:33.874982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.874994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:41.958 [2024-11-17 00:49:33.875028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.875038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.958 [2024-11-17 00:49:33.875046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.875057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.875148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.875161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.958 [2024-11-17 00:49:33.875169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.875181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.875214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.875226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:41.958 [2024-11-17 00:49:33.875235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.875247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.875291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.875302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.958 [2024-11-17 00:49:33.875310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.875321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.875396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.958 [2024-11-17 00:49:33.875410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.958 [2024-11-17 00:49:33.875418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.958 [2024-11-17 00:49:33.875427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.958 [2024-11-17 00:49:33.875578] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.754 ms, result 0 00:17:42.221 00:49:34 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:42.221 [2024-11-17 00:49:34.142617] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:42.221 [2024-11-17 00:49:34.142928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86198 ] 00:17:42.482 [2024-11-17 00:49:34.293778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.482 [2024-11-17 00:49:34.349016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.483 [2024-11-17 00:49:34.463713] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.483 [2024-11-17 00:49:34.464022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.747 [2024-11-17 00:49:34.625819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.626038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:42.747 [2024-11-17 00:49:34.626062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:42.747 [2024-11-17 00:49:34.626072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.628715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.628771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.747 [2024-11-17 00:49:34.628785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.615 ms 00:17:42.747 [2024-11-17 00:49:34.628794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.628905] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:42.747 [2024-11-17 00:49:34.629163] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:42.747 [2024-11-17 00:49:34.629182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.629191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.747 [2024-11-17 00:49:34.629204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:42.747 [2024-11-17 00:49:34.629212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.631309] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:42.747 [2024-11-17 00:49:34.635380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.635545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:42.747 [2024-11-17 00:49:34.635607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.074 ms 00:17:42.747 [2024-11-17 00:49:34.635636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.635807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.635941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:42.747 [2024-11-17 00:49:34.635964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:42.747 [2024-11-17 00:49:34.636155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.645131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:42.747 [2024-11-17 00:49:34.645284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.747 [2024-11-17 00:49:34.645342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.893 ms 00:17:42.747 [2024-11-17 00:49:34.645390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.645562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.645681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.747 [2024-11-17 00:49:34.645709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:42.747 [2024-11-17 00:49:34.645731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.645818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.645852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:42.747 [2024-11-17 00:49:34.645874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:42.747 [2024-11-17 00:49:34.645893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.645940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:42.747 [2024-11-17 00:49:34.648016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.648162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.747 [2024-11-17 00:49:34.648222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:17:42.747 [2024-11-17 00:49:34.648245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.648329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.648457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:42.747 [2024-11-17 00:49:34.648515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:42.747 [2024-11-17 00:49:34.648543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.648743] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:42.747 [2024-11-17 00:49:34.648821] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:42.747 [2024-11-17 00:49:34.648913] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:42.747 [2024-11-17 00:49:34.649070] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:42.747 [2024-11-17 00:49:34.649184] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:42.747 [2024-11-17 00:49:34.649204] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:42.747 [2024-11-17 00:49:34.649216] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:42.747 [2024-11-17 00:49:34.649234] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:42.747 [2024-11-17 00:49:34.649248] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:42.747 [2024-11-17 00:49:34.649257] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:42.747 [2024-11-17 00:49:34.649264] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:42.747 [2024-11-17 00:49:34.649273] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:42.747 [2024-11-17 00:49:34.649281] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:42.747 [2024-11-17 00:49:34.649290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.649301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:42.747 [2024-11-17 00:49:34.649313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:17:42.747 [2024-11-17 00:49:34.649321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.649529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.747 [2024-11-17 00:49:34.649597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:42.747 [2024-11-17 00:49:34.649647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:17:42.747 [2024-11-17 00:49:34.649671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.747 [2024-11-17 00:49:34.649817] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:42.747 [2024-11-17 00:49:34.649855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:42.747 [2024-11-17 00:49:34.649877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.747 [2024-11-17 00:49:34.649932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.747 [2024-11-17 00:49:34.649955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:42.747 [2024-11-17 00:49:34.649975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:42.747 [2024-11-17 00:49:34.649984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:42.747 [2024-11-17 00:49:34.649991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:42.747 [2024-11-17 00:49:34.650002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:42.747 [2024-11-17 00:49:34.650010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.747 [2024-11-17 00:49:34.650018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:42.747 [2024-11-17 00:49:34.650024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:42.747 [2024-11-17 00:49:34.650031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.747 [2024-11-17 00:49:34.650037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:42.747 [2024-11-17 00:49:34.650044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:42.747 [2024-11-17 00:49:34.650051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.747 [2024-11-17 00:49:34.650057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:42.747 [2024-11-17 00:49:34.650064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:42.747 [2024-11-17 00:49:34.650071] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.747 [2024-11-17 00:49:34.650078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:42.747 [2024-11-17 00:49:34.650085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:42.747 [2024-11-17 00:49:34.650091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.748 [2024-11-17 00:49:34.650098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:42.748 [2024-11-17 00:49:34.650105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:42.748 [2024-11-17 00:49:34.650120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.748 [2024-11-17 00:49:34.650127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:42.748 [2024-11-17 00:49:34.650135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:42.748 [2024-11-17 00:49:34.650142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.748 [2024-11-17 00:49:34.650149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:42.748 [2024-11-17 00:49:34.650155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:42.748 [2024-11-17 00:49:34.650162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.748 [2024-11-17 00:49:34.650168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:42.748 [2024-11-17 00:49:34.650175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:42.748 [2024-11-17 00:49:34.650182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.748 [2024-11-17 00:49:34.650189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:42.748 [2024-11-17 00:49:34.650196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:42.748 [2024-11-17 00:49:34.650203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.748 [2024-11-17 00:49:34.650210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:42.748 [2024-11-17 00:49:34.650219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:42.748 [2024-11-17 00:49:34.650226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.748 [2024-11-17 00:49:34.650236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:42.748 [2024-11-17 00:49:34.650244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:42.748 [2024-11-17 00:49:34.650251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.748 [2024-11-17 00:49:34.650257] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:42.748 [2024-11-17 00:49:34.650266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:42.748 [2024-11-17 00:49:34.650273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.748 [2024-11-17 00:49:34.650281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.748 [2024-11-17 00:49:34.650289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:42.748 [2024-11-17 00:49:34.650297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:42.748 [2024-11-17 00:49:34.650304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:42.748 
[2024-11-17 00:49:34.650311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:42.748 [2024-11-17 00:49:34.650318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:42.748 [2024-11-17 00:49:34.650324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:42.748 [2024-11-17 00:49:34.650333] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:42.748 [2024-11-17 00:49:34.650344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.748 [2024-11-17 00:49:34.650372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:42.748 [2024-11-17 00:49:34.650383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:42.748 [2024-11-17 00:49:34.650390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:42.748 [2024-11-17 00:49:34.650398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:42.748 [2024-11-17 00:49:34.650406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:42.748 [2024-11-17 00:49:34.650414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:42.748 [2024-11-17 00:49:34.650421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:42.748 [2024-11-17 00:49:34.650428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:42.748 [2024-11-17 00:49:34.650436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:42.748 [2024-11-17 00:49:34.650443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:42.748 [2024-11-17 00:49:34.650451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:42.748 [2024-11-17 00:49:34.650458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:42.748 [2024-11-17 00:49:34.650466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:42.748 [2024-11-17 00:49:34.650473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:42.748 [2024-11-17 00:49:34.650481] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:42.748 [2024-11-17 00:49:34.650493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.748 [2024-11-17 00:49:34.650503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:42.748 [2024-11-17 00:49:34.650514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:42.748 [2024-11-17 00:49:34.650522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:42.748 [2024-11-17 00:49:34.650529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:42.748 [2024-11-17 00:49:34.650538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.650550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:42.748 [2024-11-17 00:49:34.650561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:17:42.748 [2024-11-17 00:49:34.650568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.675892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.675961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.748 [2024-11-17 00:49:34.675979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.257 ms 00:17:42.748 [2024-11-17 00:49:34.676000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.676206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.676223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:42.748 [2024-11-17 00:49:34.676236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:42.748 [2024-11-17 00:49:34.676252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.690280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.690497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.748 [2024-11-17 00:49:34.690517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.997 ms 00:17:42.748 [2024-11-17 00:49:34.690526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.690610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.690621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.748 [2024-11-17 00:49:34.690634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:42.748 [2024-11-17 00:49:34.690642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.691235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.691269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.748 [2024-11-17 00:49:34.691281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:17:42.748 [2024-11-17 00:49:34.691290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.691472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.691490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.748 [2024-11-17 00:49:34.691500] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:17:42.748 [2024-11-17 00:49:34.691524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.700124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.700182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.748 [2024-11-17 00:49:34.700193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.576 ms 00:17:42.748 [2024-11-17 00:49:34.700202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.704418] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:42.748 [2024-11-17 00:49:34.704471] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:42.748 [2024-11-17 00:49:34.704483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.704492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:42.748 [2024-11-17 00:49:34.704501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.186 ms 00:17:42.748 [2024-11-17 00:49:34.704509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.720513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.720573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:42.748 [2024-11-17 00:49:34.720587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.892 ms 00:17:42.748 [2024-11-17 00:49:34.720596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.723715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.748 [2024-11-17 00:49:34.723763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:42.748 [2024-11-17 00:49:34.723774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:17:42.748 [2024-11-17 00:49:34.723781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.748 [2024-11-17 00:49:34.726330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.726540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:42.749 [2024-11-17 00:49:34.726572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:17:42.749 [2024-11-17 00:49:34.726580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 [2024-11-17 00:49:34.726928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.726941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:42.749 [2024-11-17 00:49:34.726954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:17:42.749 [2024-11-17 00:49:34.726962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 [2024-11-17 00:49:34.753761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.753982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:42.749 [2024-11-17 00:49:34.754005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.774 ms 00:17:42.749 [2024-11-17 00:49:34.754024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 [2024-11-17 00:49:34.762295] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:42.749 [2024-11-17 00:49:34.783287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.783342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:42.749 [2024-11-17 00:49:34.783376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.149 ms 00:17:42.749 [2024-11-17 00:49:34.783385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 [2024-11-17 00:49:34.783483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.783500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:42.749 [2024-11-17 00:49:34.783509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:42.749 [2024-11-17 00:49:34.783521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 [2024-11-17 00:49:34.783584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.783599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:42.749 [2024-11-17 00:49:34.783608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:42.749 [2024-11-17 00:49:34.783620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 [2024-11-17 00:49:34.783643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.783652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:42.749 [2024-11-17 00:49:34.783664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:42.749 [2024-11-17 00:49:34.783672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 [2024-11-17 00:49:34.783711] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:42.749 [2024-11-17 00:49:34.783725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.783734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:42.749 [2024-11-17 00:49:34.783746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:42.749 [2024-11-17 00:49:34.783754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 [2024-11-17 00:49:34.790054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.790110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:42.749 [2024-11-17 00:49:34.790122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.278 ms 00:17:42.749 [2024-11-17 00:49:34.790130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 [2024-11-17 00:49:34.790234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.749 [2024-11-17 00:49:34.790250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:42.749 [2024-11-17 00:49:34.790259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:42.749 [2024-11-17 00:49:34.790267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.749 
[2024-11-17 00:49:34.791438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:42.749 [2024-11-17 00:49:34.792893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 165.245 ms, result 0 00:17:42.749 [2024-11-17 00:49:34.794524] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:42.749 [2024-11-17 00:49:34.801751] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.140  [2024-11-17T00:49:37.148Z] Copying: 14/256 [MB] (14 MBps) [2024-11-17T00:49:38.094Z] Copying: 24/256 [MB] (10 MBps) [2024-11-17T00:49:39.039Z] Copying: 35/256 [MB] (10 MBps) [2024-11-17T00:49:39.980Z] Copying: 45/256 [MB] (10 MBps) [2024-11-17T00:49:40.927Z] Copying: 60/256 [MB] (14 MBps) [2024-11-17T00:49:41.872Z] Copying: 72/256 [MB] (12 MBps) [2024-11-17T00:49:43.257Z] Copying: 86/256 [MB] (13 MBps) [2024-11-17T00:49:44.204Z] Copying: 99/256 [MB] (13 MBps) [2024-11-17T00:49:45.148Z] Copying: 113/256 [MB] (13 MBps) [2024-11-17T00:49:46.093Z] Copying: 125/256 [MB] (12 MBps) [2024-11-17T00:49:47.038Z] Copying: 138/256 [MB] (13 MBps) [2024-11-17T00:49:47.980Z] Copying: 151/256 [MB] (12 MBps) [2024-11-17T00:49:48.926Z] Copying: 168/256 [MB] (17 MBps) [2024-11-17T00:49:49.869Z] Copying: 187/256 [MB] (18 MBps) [2024-11-17T00:49:51.255Z] Copying: 201/256 [MB] (13 MBps) [2024-11-17T00:49:52.201Z] Copying: 217/256 [MB] (16 MBps) [2024-11-17T00:49:53.143Z] Copying: 231/256 [MB] (13 MBps) [2024-11-17T00:49:53.403Z] Copying: 243/256 [MB] (11 MBps) [2024-11-17T00:49:53.977Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-17 00:49:53.771988] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:01.914 [2024-11-17 00:49:53.774320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.774406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:01.914 [2024-11-17 00:49:53.774431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:01.914 [2024-11-17 00:49:53.774441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.774468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:01.914 [2024-11-17 00:49:53.775216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.775266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:01.914 [2024-11-17 00:49:53.775278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:18:01.914 [2024-11-17 00:49:53.775287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.775599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.775614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:01.914 [2024-11-17 00:49:53.775624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:18:01.914 [2024-11-17 00:49:53.775633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.779350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.779395] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:01.914 [2024-11-17 00:49:53.779405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:18:01.914 [2024-11-17 00:49:53.779413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.787064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.787293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:01.914 [2024-11-17 00:49:53.787317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.630 ms 00:18:01.914 [2024-11-17 00:49:53.787327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.790604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.790666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:01.914 [2024-11-17 00:49:53.790677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:18:01.914 [2024-11-17 00:49:53.790701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.795550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.795606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:01.914 [2024-11-17 00:49:53.795627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:18:01.914 [2024-11-17 00:49:53.795636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.795780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.795793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:01.914 [2024-11-17 00:49:53.795802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:01.914 [2024-11-17 00:49:53.795810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.799620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.799674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:01.914 [2024-11-17 00:49:53.799685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.782 ms 00:18:01.914 [2024-11-17 00:49:53.799694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.802738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.802792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:01.914 [2024-11-17 00:49:53.802802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms 00:18:01.914 [2024-11-17 00:49:53.802811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.805224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.914 [2024-11-17 00:49:53.805276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:01.914 [2024-11-17 00:49:53.805288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.359 ms 00:18:01.914 [2024-11-17 00:49:53.805295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.807562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
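The Persist steps in this stretch are the 'FTL shutdown' management pipeline writing dirty state back to media (L2P, NV cache metadata, valid map, P2L, band info, trim log, superblock) so the next startup can load a clean superblock instead of running recovery. Here the test harness drives the shutdown; against a live target the same pipeline is normally kicked off by deleting the FTL bdev, roughly (a sketch, assuming a running spdk_tgt; the bdev name ftl0 matches this run):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0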
00:18:01.914 [2024-11-17 00:49:53.807613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:01.914 [2024-11-17 00:49:53.807624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:18:01.914 [2024-11-17 00:49:53.807631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.914 [2024-11-17 00:49:53.807679] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:01.914 [2024-11-17 00:49:53.807704 … 00:49:53.808546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 … Band 100: 0 / 261120 wr_cnt: 0 state: free (identical for all 100 bands) 00:18:01.915 [2024-11-17 00:49:53.808563] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:01.915 [2024-11-17 00:49:53.808602] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa34e40b-328a-44ca-be31-18a041db8592 00:18:01.916 [2024-11-17 00:49:53.808626] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:01.916 [2024-11-17 00:49:53.808634] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:01.916 [2024-11-17 00:49:53.808642] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:01.916 [2024-11-17 00:49:53.808652] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:01.916 [2024-11-17 00:49:53.808664] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:01.916 [2024-11-17 00:49:53.808673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:01.916 [2024-11-17 00:49:53.808681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:01.916 [2024-11-17 00:49:53.808689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:01.916 [2024-11-17 00:49:53.808697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:01.916 [2024-11-17 00:49:53.808705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.916 [2024-11-17 00:49:53.808720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:01.916 [2024-11-17 00:49:53.808737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:18:01.916 [2024-11-17 00:49:53.808744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.811174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.916 [2024-11-17 00:49:53.811210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:01.916 [2024-11-17 00:49:53.811221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:18:01.916 [2024-11-17 00:49:53.811231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.811390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.916 [2024-11-17 00:49:53.811410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:01.916 [2024-11-17 00:49:53.811419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:18:01.916 [2024-11-17 00:49:53.811427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.819302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.819352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:01.916 [2024-11-17 00:49:53.819389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916
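A note on the statistics dumped above: the WAF line is write amplification, total media writes divided by user writes, and with user writes: 0 the quotient is undefined, hence inf. Taking the counter labels at face value (an assumption; ftl_debug.c defines the exact semantics):

    WAF = total writes / user writes = 960 / 0 -> inf

which suggests the 960 writes recorded at this point are internal metadata traffic rather than user I/O.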
[2024-11-17 00:49:53.819399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.819497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.819511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:01.916 [2024-11-17 00:49:53.819520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.819529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.819582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.819593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:01.916 [2024-11-17 00:49:53.819603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.819617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.819639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.819649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:01.916 [2024-11-17 00:49:53.819662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.819670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.833623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.833676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:01.916 [2024-11-17 00:49:53.833697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.833706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.843885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.843945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:01.916 [2024-11-17 00:49:53.843956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.843964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.844010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.844026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:01.916 [2024-11-17 00:49:53.844035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.844046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.844077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.844086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:01.916 [2024-11-17 00:49:53.844095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.844106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.844184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.844199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:01.916 [2024-11-17 00:49:53.844208] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.844220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.844254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.844265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:01.916 [2024-11-17 00:49:53.844277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.844286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.844332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.844342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:01.916 [2024-11-17 00:49:53.844369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.844380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.844425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:01.916 [2024-11-17 00:49:53.844437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:01.916 [2024-11-17 00:49:53.844451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:01.916 [2024-11-17 00:49:53.844463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.916 [2024-11-17 00:49:53.844637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.265 ms, result 0 00:18:02.178 00:18:02.178 00:18:02.178 00:49:54 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:02.750 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:02.750 00:49:54 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:02.750 00:49:54 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:02.750 00:49:54 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:02.750 00:49:54 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:02.750 00:49:54 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:02.750 00:49:54 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:02.750 Process with pid 86162 is not found 00:18:02.750 00:49:54 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86162 00:18:02.750 00:49:54 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86162 ']' 00:18:02.750 00:49:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86162 00:18:02.750 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86162) - No such process 00:18:02.750 00:49:54 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 86162 is not found' 00:18:02.750 00:18:02.750 real 1m17.993s 00:18:02.750 user 1m40.360s 00:18:02.750 sys 0m5.715s 00:18:02.750 ************************************ 00:18:02.750 END TEST ftl_trim 00:18:02.750 ************************************ 00:18:02.750 00:49:54 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:02.750 00:49:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:02.750 00:49:54 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:02.750 00:49:54 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:02.750 00:49:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:02.750 00:49:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:03.012 ************************************ 00:18:03.012 START TEST ftl_restore 00:18:03.012 ************************************ 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:03.012 * Looking for test storage... 00:18:03.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.012 00:49:54 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:03.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.012 --rc genhtml_branch_coverage=1 00:18:03.012 --rc genhtml_function_coverage=1 00:18:03.012 --rc genhtml_legend=1 00:18:03.012 --rc geninfo_all_blocks=1 00:18:03.012 --rc geninfo_unexecuted_blocks=1 00:18:03.012 00:18:03.012 ' 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:03.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.012 --rc genhtml_branch_coverage=1 00:18:03.012 --rc genhtml_function_coverage=1 00:18:03.012 --rc genhtml_legend=1 00:18:03.012 --rc geninfo_all_blocks=1 00:18:03.012 --rc geninfo_unexecuted_blocks=1 00:18:03.012 00:18:03.012 ' 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:03.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.012 --rc genhtml_branch_coverage=1 00:18:03.012 --rc genhtml_function_coverage=1 00:18:03.012 --rc genhtml_legend=1 00:18:03.012 --rc geninfo_all_blocks=1 00:18:03.012 --rc geninfo_unexecuted_blocks=1 00:18:03.012 00:18:03.012 ' 00:18:03.012 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:03.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.012 --rc genhtml_branch_coverage=1 00:18:03.012 --rc genhtml_function_coverage=1 00:18:03.012 --rc genhtml_legend=1 00:18:03.012 --rc geninfo_all_blocks=1 00:18:03.012 --rc geninfo_unexecuted_blocks=1 00:18:03.012 00:18:03.012 ' 00:18:03.012 00:49:54 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:03.012 00:49:54 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:03.012 00:49:54 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:03.012 00:49:54 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:03.012 00:49:54 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
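The shell gymnastics above are scripts/common.sh deciding whether the installed lcov is at least version 2 ("lt 1.15 2"): each version string is split on ., -, and :, then compared component by component until one side wins. A compact reconstruction (hypothetical; the real cmp_versions tracks lt/gt/eq counters and validates digits with decimal(), so it differs in detail):

    cmp_versions() {                      # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        done
        [[ $op != '<' && $op != '>' ]]    # all components equal: only non-strict operators pass
    }

Here 1 < 2 is decided at the first component, so lt 1.15 2 succeeds and the LCOV_OPTS block above gets exported.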
00:18:03.012 00:49:54 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:03.012 00:49:54 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.012 00:49:54 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.f1CPtkA5Wc 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:03.013 
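restore.sh's argument handling, reconstructed from the trace above (the invocation was restore.sh -c 0000:00:10.0 0000:00:11.0; this is a hypothetical sketch of the traced lines 13 through 25, and the real script may differ):

    mount_dir=$(mktemp -d)             # /tmp/tmp.f1CPtkA5Wc in this run
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;     # -c <bdf>: NV cache device (0000:00:10.0)
        esac
    done
    shift 2                            # consume '-c <bdf>'
    device=$1                          # base device (0000:00:11.0)
    timeout=240

Note that the -f flag takes no argument; when it is absent its flag variable stays empty, which is what later makes line 54's '[' '' -eq 1 ']' test print the harmless "integer expression expected" complaint seen further down.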
00:49:54 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86490 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86490 00:18:03.013 00:49:54 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.013 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86490 ']' 00:18:03.013 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.013 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.013 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.013 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.013 00:49:54 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:03.277 [2024-11-17 00:49:55.094637] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:03.277 [2024-11-17 00:49:55.094791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86490 ] 00:18:03.277 [2024-11-17 00:49:55.248769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.277 [2024-11-17 00:49:55.300196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.896 00:49:55 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.896 00:49:55 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:03.896 00:49:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:03.896 00:49:55 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:03.896 00:49:55 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:03.896 00:49:55 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:03.896 00:49:55 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:03.896 00:49:55 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:04.470 00:49:56 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:04.470 00:49:56 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:04.470 00:49:56 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:04.470 { 00:18:04.470 "name": "nvme0n1", 00:18:04.470 "aliases": [ 00:18:04.470 "c907d453-8f5d-4d3e-aca0-dcf11b4f7777" 00:18:04.470 ], 00:18:04.470 "product_name": "NVMe disk", 00:18:04.470 "block_size": 4096, 00:18:04.470 "num_blocks": 1310720, 00:18:04.470 "uuid": 
"c907d453-8f5d-4d3e-aca0-dcf11b4f7777", 00:18:04.470 "numa_id": -1, 00:18:04.470 "assigned_rate_limits": { 00:18:04.470 "rw_ios_per_sec": 0, 00:18:04.470 "rw_mbytes_per_sec": 0, 00:18:04.470 "r_mbytes_per_sec": 0, 00:18:04.470 "w_mbytes_per_sec": 0 00:18:04.470 }, 00:18:04.470 "claimed": true, 00:18:04.470 "claim_type": "read_many_write_one", 00:18:04.470 "zoned": false, 00:18:04.470 "supported_io_types": { 00:18:04.470 "read": true, 00:18:04.470 "write": true, 00:18:04.470 "unmap": true, 00:18:04.470 "flush": true, 00:18:04.470 "reset": true, 00:18:04.470 "nvme_admin": true, 00:18:04.470 "nvme_io": true, 00:18:04.470 "nvme_io_md": false, 00:18:04.470 "write_zeroes": true, 00:18:04.470 "zcopy": false, 00:18:04.470 "get_zone_info": false, 00:18:04.470 "zone_management": false, 00:18:04.470 "zone_append": false, 00:18:04.470 "compare": true, 00:18:04.470 "compare_and_write": false, 00:18:04.470 "abort": true, 00:18:04.470 "seek_hole": false, 00:18:04.470 "seek_data": false, 00:18:04.470 "copy": true, 00:18:04.470 "nvme_iov_md": false 00:18:04.470 }, 00:18:04.470 "driver_specific": { 00:18:04.470 "nvme": [ 00:18:04.470 { 00:18:04.470 "pci_address": "0000:00:11.0", 00:18:04.470 "trid": { 00:18:04.470 "trtype": "PCIe", 00:18:04.470 "traddr": "0000:00:11.0" 00:18:04.470 }, 00:18:04.470 "ctrlr_data": { 00:18:04.470 "cntlid": 0, 00:18:04.470 "vendor_id": "0x1b36", 00:18:04.470 "model_number": "QEMU NVMe Ctrl", 00:18:04.470 "serial_number": "12341", 00:18:04.470 "firmware_revision": "8.0.0", 00:18:04.470 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:04.470 "oacs": { 00:18:04.470 "security": 0, 00:18:04.470 "format": 1, 00:18:04.470 "firmware": 0, 00:18:04.470 "ns_manage": 1 00:18:04.470 }, 00:18:04.470 "multi_ctrlr": false, 00:18:04.470 "ana_reporting": false 00:18:04.470 }, 00:18:04.470 "vs": { 00:18:04.470 "nvme_version": "1.4" 00:18:04.470 }, 00:18:04.470 "ns_data": { 00:18:04.470 "id": 1, 00:18:04.470 "can_share": false 00:18:04.470 } 00:18:04.470 } 00:18:04.470 ], 00:18:04.470 "mp_policy": "active_passive" 00:18:04.470 } 00:18:04.470 } 00:18:04.470 ]' 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:04.470 00:49:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:04.470 00:49:56 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:04.470 00:49:56 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:04.470 00:49:56 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:04.470 00:49:56 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:04.732 00:49:56 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:04.732 00:49:56 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=9a1d17b6-263d-4558-8fe4-ead9c9ad21a9 00:18:04.732 00:49:56 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:04.732 00:49:56 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a1d17b6-263d-4558-8fe4-ead9c9ad21a9 00:18:04.992 00:49:56 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:05.253 00:49:57 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=f97ef6d5-c1d6-4d91-9a85-545fcf30333b 00:18:05.253 00:49:57 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f97ef6d5-c1d6-4d91-9a85-545fcf30333b 00:18:05.515 00:49:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:05.515 00:49:57 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:05.515 00:49:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:05.515 00:49:57 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:05.515 00:49:57 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:05.515 00:49:57 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:05.515 00:49:57 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:05.515 00:49:57 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:05.515 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:05.515 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:05.515 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:05.515 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:05.515 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:05.777 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:05.777 { 00:18:05.777 "name": "93d361d1-d8a1-4af4-bbfd-4fd27b75174c", 00:18:05.777 "aliases": [ 00:18:05.777 "lvs/nvme0n1p0" 00:18:05.777 ], 00:18:05.777 "product_name": "Logical Volume", 00:18:05.777 "block_size": 4096, 00:18:05.777 "num_blocks": 26476544, 00:18:05.777 "uuid": "93d361d1-d8a1-4af4-bbfd-4fd27b75174c", 00:18:05.777 "assigned_rate_limits": { 00:18:05.777 "rw_ios_per_sec": 0, 00:18:05.777 "rw_mbytes_per_sec": 0, 00:18:05.777 "r_mbytes_per_sec": 0, 00:18:05.777 "w_mbytes_per_sec": 0 00:18:05.777 }, 00:18:05.777 "claimed": false, 00:18:05.777 "zoned": false, 00:18:05.777 "supported_io_types": { 00:18:05.777 "read": true, 00:18:05.777 "write": true, 00:18:05.777 "unmap": true, 00:18:05.777 "flush": false, 00:18:05.777 "reset": true, 00:18:05.777 "nvme_admin": false, 00:18:05.777 "nvme_io": false, 00:18:05.777 "nvme_io_md": false, 00:18:05.777 "write_zeroes": true, 00:18:05.777 "zcopy": false, 00:18:05.777 "get_zone_info": false, 00:18:05.777 "zone_management": false, 00:18:05.777 "zone_append": false, 00:18:05.777 "compare": false, 00:18:05.778 "compare_and_write": false, 00:18:05.778 "abort": false, 00:18:05.778 "seek_hole": true, 00:18:05.778 "seek_data": true, 00:18:05.778 "copy": false, 00:18:05.778 "nvme_iov_md": false 00:18:05.778 }, 00:18:05.778 "driver_specific": { 00:18:05.778 "lvol": { 00:18:05.778 "lvol_store_uuid": "f97ef6d5-c1d6-4d91-9a85-545fcf30333b", 00:18:05.778 "base_bdev": "nvme0n1", 00:18:05.778 "thin_provision": true, 00:18:05.778 "num_allocated_clusters": 0, 00:18:05.778 "snapshot": false, 00:18:05.778 "clone": false, 00:18:05.778 "esnap_clone": false 00:18:05.778 } 00:18:05.778 } 00:18:05.778 } 00:18:05.778 ]' 00:18:05.778 00:49:57 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:05.778 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:05.778 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:05.778 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:05.778 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:05.778 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:05.778 00:49:57 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:05.778 00:49:57 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:05.778 00:49:57 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:06.039 00:49:57 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:06.039 00:49:57 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:06.039 00:49:57 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:06.039 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:06.039 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:06.039 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:06.039 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:06.039 00:49:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:06.300 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:06.300 { 00:18:06.300 "name": "93d361d1-d8a1-4af4-bbfd-4fd27b75174c", 00:18:06.300 "aliases": [ 00:18:06.300 "lvs/nvme0n1p0" 00:18:06.300 ], 00:18:06.300 "product_name": "Logical Volume", 00:18:06.300 "block_size": 4096, 00:18:06.300 "num_blocks": 26476544, 00:18:06.300 "uuid": "93d361d1-d8a1-4af4-bbfd-4fd27b75174c", 00:18:06.300 "assigned_rate_limits": { 00:18:06.300 "rw_ios_per_sec": 0, 00:18:06.300 "rw_mbytes_per_sec": 0, 00:18:06.300 "r_mbytes_per_sec": 0, 00:18:06.300 "w_mbytes_per_sec": 0 00:18:06.300 }, 00:18:06.300 "claimed": false, 00:18:06.300 "zoned": false, 00:18:06.300 "supported_io_types": { 00:18:06.300 "read": true, 00:18:06.300 "write": true, 00:18:06.300 "unmap": true, 00:18:06.300 "flush": false, 00:18:06.300 "reset": true, 00:18:06.300 "nvme_admin": false, 00:18:06.300 "nvme_io": false, 00:18:06.300 "nvme_io_md": false, 00:18:06.300 "write_zeroes": true, 00:18:06.300 "zcopy": false, 00:18:06.300 "get_zone_info": false, 00:18:06.300 "zone_management": false, 00:18:06.300 "zone_append": false, 00:18:06.300 "compare": false, 00:18:06.300 "compare_and_write": false, 00:18:06.300 "abort": false, 00:18:06.300 "seek_hole": true, 00:18:06.300 "seek_data": true, 00:18:06.300 "copy": false, 00:18:06.300 "nvme_iov_md": false 00:18:06.300 }, 00:18:06.300 "driver_specific": { 00:18:06.300 "lvol": { 00:18:06.300 "lvol_store_uuid": "f97ef6d5-c1d6-4d91-9a85-545fcf30333b", 00:18:06.300 "base_bdev": "nvme0n1", 00:18:06.300 "thin_provision": true, 00:18:06.300 "num_allocated_clusters": 0, 00:18:06.300 "snapshot": false, 00:18:06.300 "clone": false, 00:18:06.300 "esnap_clone": false 00:18:06.300 } 00:18:06.300 } 00:18:06.300 } 00:18:06.300 ]' 00:18:06.300 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
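get_bdev_size, traced repeatedly above, is just an RPC query plus jq and integer math: for the lvol that is 4096 B blocks x 26476544 blocks / 1024 / 1024 = 103424 MiB, and 4096 x 1310720 / 1024 / 1024 = 5120 MiB for nvme0n1 earlier. A sketch of the helper as reconstructed from the trace (hypothetical; the real autotest_common.sh may differ):

    get_bdev_size() {                  # prints the bdev size in MiB
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $((bs * nb / 1024 / 1024))
    }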
00:18:06.300 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:06.300 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:06.300 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:06.300 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:06.300 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:06.300 00:49:58 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:06.300 00:49:58 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:06.560 00:49:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:06.560 00:49:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:06.560 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:06.560 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:06.560 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:06.560 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:06.560 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93d361d1-d8a1-4af4-bbfd-4fd27b75174c 00:18:06.818 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:06.818 { 00:18:06.818 "name": "93d361d1-d8a1-4af4-bbfd-4fd27b75174c", 00:18:06.818 "aliases": [ 00:18:06.818 "lvs/nvme0n1p0" 00:18:06.818 ], 00:18:06.818 "product_name": "Logical Volume", 00:18:06.818 "block_size": 4096, 00:18:06.818 "num_blocks": 26476544, 00:18:06.818 "uuid": "93d361d1-d8a1-4af4-bbfd-4fd27b75174c", 00:18:06.818 "assigned_rate_limits": { 00:18:06.818 "rw_ios_per_sec": 0, 00:18:06.818 "rw_mbytes_per_sec": 0, 00:18:06.818 "r_mbytes_per_sec": 0, 00:18:06.818 "w_mbytes_per_sec": 0 00:18:06.818 }, 00:18:06.818 "claimed": false, 00:18:06.818 "zoned": false, 00:18:06.818 "supported_io_types": { 00:18:06.818 "read": true, 00:18:06.818 "write": true, 00:18:06.818 "unmap": true, 00:18:06.818 "flush": false, 00:18:06.818 "reset": true, 00:18:06.818 "nvme_admin": false, 00:18:06.818 "nvme_io": false, 00:18:06.818 "nvme_io_md": false, 00:18:06.818 "write_zeroes": true, 00:18:06.818 "zcopy": false, 00:18:06.818 "get_zone_info": false, 00:18:06.818 "zone_management": false, 00:18:06.818 "zone_append": false, 00:18:06.818 "compare": false, 00:18:06.818 "compare_and_write": false, 00:18:06.818 "abort": false, 00:18:06.818 "seek_hole": true, 00:18:06.818 "seek_data": true, 00:18:06.818 "copy": false, 00:18:06.818 "nvme_iov_md": false 00:18:06.818 }, 00:18:06.818 "driver_specific": { 00:18:06.818 "lvol": { 00:18:06.818 "lvol_store_uuid": "f97ef6d5-c1d6-4d91-9a85-545fcf30333b", 00:18:06.818 "base_bdev": "nvme0n1", 00:18:06.818 "thin_provision": true, 00:18:06.818 "num_allocated_clusters": 0, 00:18:06.818 "snapshot": false, 00:18:06.818 "clone": false, 00:18:06.818 "esnap_clone": false 00:18:06.818 } 00:18:06.818 } 00:18:06.818 } 00:18:06.818 ]' 00:18:06.818 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:06.818 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:06.818 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:06.818 00:49:58 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:18:06.818 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:06.818 00:49:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:06.818 00:49:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:06.818 00:49:58 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 93d361d1-d8a1-4af4-bbfd-4fd27b75174c --l2p_dram_limit 10' 00:18:06.818 00:49:58 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:06.818 00:49:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:06.818 00:49:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:06.818 00:49:58 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:06.818 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:06.818 00:49:58 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 93d361d1-d8a1-4af4-bbfd-4fd27b75174c --l2p_dram_limit 10 -c nvc0n1p0 00:18:07.078 [2024-11-17 00:49:58.976347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.976391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:07.078 [2024-11-17 00:49:58.976402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:07.078 [2024-11-17 00:49:58.976412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.976451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.976461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:07.078 [2024-11-17 00:49:58.976467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:07.078 [2024-11-17 00:49:58.976475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.976495] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:07.078 [2024-11-17 00:49:58.976719] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:07.078 [2024-11-17 00:49:58.976731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.976739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:07.078 [2024-11-17 00:49:58.976747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:18:07.078 [2024-11-17 00:49:58.976754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.976775] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 15826fcd-9a39-4e0b-88df-ff1adb33f8d8 00:18:07.078 [2024-11-17 00:49:58.977835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.977916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:07.078 [2024-11-17 00:49:58.977964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:07.078 [2024-11-17 00:49:58.977982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.982815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 
00:49:58.982916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:07.078 [2024-11-17 00:49:58.983216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.758 ms 00:18:07.078 [2024-11-17 00:49:58.983251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.983337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.983638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:07.078 [2024-11-17 00:49:58.983686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:07.078 [2024-11-17 00:49:58.983704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.983759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.983780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:07.078 [2024-11-17 00:49:58.983798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:07.078 [2024-11-17 00:49:58.983813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.983946] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:07.078 [2024-11-17 00:49:58.985266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.985380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:07.078 [2024-11-17 00:49:58.985394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:18:07.078 [2024-11-17 00:49:58.985401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.985427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.985435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:07.078 [2024-11-17 00:49:58.985441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:07.078 [2024-11-17 00:49:58.985450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.985463] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:07.078 [2024-11-17 00:49:58.985584] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:07.078 [2024-11-17 00:49:58.985594] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:07.078 [2024-11-17 00:49:58.985606] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:07.078 [2024-11-17 00:49:58.985614] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:07.078 [2024-11-17 00:49:58.985624] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:07.078 [2024-11-17 00:49:58.985630] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:07.078 [2024-11-17 00:49:58.985639] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:07.078 [2024-11-17 00:49:58.985645] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:07.078 [2024-11-17 00:49:58.985652] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:07.078 [2024-11-17 00:49:58.985660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.985667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:07.078 [2024-11-17 00:49:58.985673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:18:07.078 [2024-11-17 00:49:58.985680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.985747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.078 [2024-11-17 00:49:58.985757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:07.078 [2024-11-17 00:49:58.985763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:07.078 [2024-11-17 00:49:58.985770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.078 [2024-11-17 00:49:58.985852] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:07.078 [2024-11-17 00:49:58.985863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:07.078 [2024-11-17 00:49:58.985869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:07.078 [2024-11-17 00:49:58.985879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.078 [2024-11-17 00:49:58.985885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:07.078 [2024-11-17 00:49:58.985892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:07.078 [2024-11-17 00:49:58.985898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:07.078 [2024-11-17 00:49:58.985905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:07.078 [2024-11-17 00:49:58.985910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:07.078 [2024-11-17 00:49:58.985916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.078 [2024-11-17 00:49:58.985921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:07.078 [2024-11-17 00:49:58.985927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:07.078 [2024-11-17 00:49:58.985933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.078 [2024-11-17 00:49:58.985941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:07.078 [2024-11-17 00:49:58.985946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:07.078 [2024-11-17 00:49:58.985953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.078 [2024-11-17 00:49:58.985959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:07.078 [2024-11-17 00:49:58.985967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:07.078 [2024-11-17 00:49:58.985973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.078 [2024-11-17 00:49:58.985981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:07.078 [2024-11-17 00:49:58.985987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:07.078 [2024-11-17 00:49:58.985995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.078 [2024-11-17 00:49:58.986002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:07.078 
[2024-11-17 00:49:58.986009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:07.078 [2024-11-17 00:49:58.986015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.078 [2024-11-17 00:49:58.986023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:07.078 [2024-11-17 00:49:58.986028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:07.078 [2024-11-17 00:49:58.986036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.078 [2024-11-17 00:49:58.986042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:07.078 [2024-11-17 00:49:58.986051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:07.078 [2024-11-17 00:49:58.986056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.078 [2024-11-17 00:49:58.986065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:07.078 [2024-11-17 00:49:58.986071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:07.078 [2024-11-17 00:49:58.986078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.078 [2024-11-17 00:49:58.986084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:07.078 [2024-11-17 00:49:58.986091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:07.078 [2024-11-17 00:49:58.986097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.078 [2024-11-17 00:49:58.986104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:07.078 [2024-11-17 00:49:58.986110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:07.078 [2024-11-17 00:49:58.986117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.079 [2024-11-17 00:49:58.986123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:07.079 [2024-11-17 00:49:58.986130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:07.079 [2024-11-17 00:49:58.986136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.079 [2024-11-17 00:49:58.986144] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:07.079 [2024-11-17 00:49:58.986150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:07.079 [2024-11-17 00:49:58.986159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:07.079 [2024-11-17 00:49:58.986166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.079 [2024-11-17 00:49:58.986174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:07.079 [2024-11-17 00:49:58.986180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:07.079 [2024-11-17 00:49:58.986187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:07.079 [2024-11-17 00:49:58.986194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:07.079 [2024-11-17 00:49:58.986201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:07.079 [2024-11-17 00:49:58.986206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:07.079 [2024-11-17 00:49:58.986217] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:07.079 [2024-11-17 
00:49:58.986225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.079 [2024-11-17 00:49:58.986234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:07.079 [2024-11-17 00:49:58.986240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:07.079 [2024-11-17 00:49:58.986250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:07.079 [2024-11-17 00:49:58.986256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:07.079 [2024-11-17 00:49:58.986264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:07.079 [2024-11-17 00:49:58.986270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:07.079 [2024-11-17 00:49:58.986279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:07.079 [2024-11-17 00:49:58.986286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:07.079 [2024-11-17 00:49:58.986293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:07.079 [2024-11-17 00:49:58.986300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:07.079 [2024-11-17 00:49:58.986309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:07.079 [2024-11-17 00:49:58.986323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:07.079 [2024-11-17 00:49:58.986331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:07.079 [2024-11-17 00:49:58.986337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:07.079 [2024-11-17 00:49:58.986345] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:07.079 [2024-11-17 00:49:58.986367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.079 [2024-11-17 00:49:58.986376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:07.079 [2024-11-17 00:49:58.986383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:07.079 [2024-11-17 00:49:58.986390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:07.079 [2024-11-17 00:49:58.986398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:07.079 [2024-11-17 00:49:58.986406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.079 [2024-11-17 00:49:58.986413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:07.079 [2024-11-17 00:49:58.986423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:18:07.079 [2024-11-17 00:49:58.986429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.079 [2024-11-17 00:49:58.986460] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:07.079 [2024-11-17 00:49:58.986467] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:10.370 [2024-11-17 00:50:01.860632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:01.860705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:10.370 [2024-11-17 00:50:01.860728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2874.152 ms 00:18:10.370 [2024-11-17 00:50:01.860737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:01.871924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:01.871974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.370 [2024-11-17 00:50:01.871990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.081 ms 00:18:10.370 [2024-11-17 00:50:01.871998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:01.872119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:01.872130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:10.370 [2024-11-17 00:50:01.872145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:10.370 [2024-11-17 00:50:01.872154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:01.882751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:01.882796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.370 [2024-11-17 00:50:01.882810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.555 ms 00:18:10.370 [2024-11-17 00:50:01.882818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:01.882857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:01.882872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.370 [2024-11-17 00:50:01.882883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:10.370 [2024-11-17 00:50:01.882891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:01.883380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:01.883410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.370 [2024-11-17 00:50:01.883423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:18:10.370 [2024-11-17 00:50:01.883432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 
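(A quick cross-check of the L2P figures in the layout dump above, using only values the log prints; this is arithmetic over logged numbers, not part of the test run:)

    l2p_entries=20971520      # "L2P entries" from the layout dump
    l2p_addr_size=4           # "L2P address size", bytes per entry
    echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))  # 80, i.e. the 80.00 MiB "Region l2p"
    echo $(( l2p_entries * 4096 / 1024 / 1024 ))           # 81920 MiB (80 GiB) of 4 KiB blocks mapped

Only a fraction of that 80 MiB table may be resident in DRAM: the bdev_ftl_create invocation earlier passed --l2p_dram_limit 10, which the resident-size message that follows reflects.
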
[2024-11-17 00:50:01.883557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:01.883567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.370 [2024-11-17 00:50:01.883582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:10.370 [2024-11-17 00:50:01.883591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:01.909585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:01.909652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.370 [2024-11-17 00:50:01.909675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.963 ms 00:18:10.370 [2024-11-17 00:50:01.909689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:01.921237] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:10.370 [2024-11-17 00:50:01.925050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:01.925107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:10.370 [2024-11-17 00:50:01.925119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.192 ms 00:18:10.370 [2024-11-17 00:50:01.925129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:02.009769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:02.009841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:10.370 [2024-11-17 00:50:02.009855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.602 ms 00:18:10.370 [2024-11-17 00:50:02.009870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:02.010083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:02.010099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:10.370 [2024-11-17 00:50:02.010108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:18:10.370 [2024-11-17 00:50:02.010118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.370 [2024-11-17 00:50:02.016434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.370 [2024-11-17 00:50:02.016494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:10.370 [2024-11-17 00:50:02.016506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.253 ms 00:18:10.371 [2024-11-17 00:50:02.016517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.371 [2024-11-17 00:50:02.022018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.371 [2024-11-17 00:50:02.022075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:10.371 [2024-11-17 00:50:02.022087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.446 ms 00:18:10.371 [2024-11-17 00:50:02.022097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.371 [2024-11-17 00:50:02.022491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.371 [2024-11-17 00:50:02.022515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:10.371 
[2024-11-17 00:50:02.022527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:18:10.371 [2024-11-17 00:50:02.022542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.371 [2024-11-17 00:50:02.065222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.371 [2024-11-17 00:50:02.065280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:10.371 [2024-11-17 00:50:02.065292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.656 ms 00:18:10.371 [2024-11-17 00:50:02.065304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.371 [2024-11-17 00:50:02.072165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.371 [2024-11-17 00:50:02.072219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:10.371 [2024-11-17 00:50:02.072231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.782 ms 00:18:10.371 [2024-11-17 00:50:02.072242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.371 [2024-11-17 00:50:02.078100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.371 [2024-11-17 00:50:02.078155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:10.371 [2024-11-17 00:50:02.078165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.812 ms 00:18:10.371 [2024-11-17 00:50:02.078175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.371 [2024-11-17 00:50:02.084020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.371 [2024-11-17 00:50:02.084073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:10.371 [2024-11-17 00:50:02.084084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.799 ms 00:18:10.371 [2024-11-17 00:50:02.084097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.371 [2024-11-17 00:50:02.084149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.371 [2024-11-17 00:50:02.084161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:10.371 [2024-11-17 00:50:02.084170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:10.371 [2024-11-17 00:50:02.084181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.371 [2024-11-17 00:50:02.084254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.371 [2024-11-17 00:50:02.084267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:10.371 [2024-11-17 00:50:02.084276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:10.371 [2024-11-17 00:50:02.084294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.371 [2024-11-17 00:50:02.085432] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3108.572 ms, result 0 00:18:10.371 { 00:18:10.371 "name": "ftl0", 00:18:10.371 "uuid": "15826fcd-9a39-4e0b-88df-ff1adb33f8d8" 00:18:10.371 } 00:18:10.371 00:50:02 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:10.371 00:50:02 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:10.371 00:50:02 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:10.371 00:50:02 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:10.634 [2024-11-17 00:50:02.531731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.531784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:10.634 [2024-11-17 00:50:02.531798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:10.634 [2024-11-17 00:50:02.531807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.531835] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:10.634 [2024-11-17 00:50:02.532679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.532732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:10.634 [2024-11-17 00:50:02.532744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:18:10.634 [2024-11-17 00:50:02.532762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.533028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.533042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:10.634 [2024-11-17 00:50:02.533051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:18:10.634 [2024-11-17 00:50:02.533060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.536305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.536341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:10.634 [2024-11-17 00:50:02.536369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:18:10.634 [2024-11-17 00:50:02.536379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.542982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.543028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:10.634 [2024-11-17 00:50:02.543042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.585 ms 00:18:10.634 [2024-11-17 00:50:02.543054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.545567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.545625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:10.634 [2024-11-17 00:50:02.545636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.424 ms 00:18:10.634 [2024-11-17 00:50:02.545646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.552131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.552194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:10.634 [2024-11-17 00:50:02.552206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.435 ms 00:18:10.634 [2024-11-17 00:50:02.552222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.552379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.552394] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:10.634 [2024-11-17 00:50:02.552404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:10.634 [2024-11-17 00:50:02.552414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.555715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.555770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:10.634 [2024-11-17 00:50:02.555781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.278 ms 00:18:10.634 [2024-11-17 00:50:02.555791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.558725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.558783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:10.634 [2024-11-17 00:50:02.558793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:18:10.634 [2024-11-17 00:50:02.558804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.560919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.560976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:10.634 [2024-11-17 00:50:02.560987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms 00:18:10.634 [2024-11-17 00:50:02.560996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.563034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.634 [2024-11-17 00:50:02.563092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:10.634 [2024-11-17 00:50:02.563103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.962 ms 00:18:10.634 [2024-11-17 00:50:02.563112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.634 [2024-11-17 00:50:02.563206] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:10.634 [2024-11-17 00:50:02.563238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:10.634 [2024-11-17 00:50:02.563257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:10.634 [2024-11-17 00:50:02.563269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:10.634 [2024-11-17 00:50:02.563280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:10.634 [2024-11-17 00:50:02.563294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:10.634 [2024-11-17 00:50:02.563310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563351] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 
[2024-11-17 00:50:02.563676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:10.635 [2024-11-17 00:50:02.563906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.563995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:10.635 [2024-11-17 00:50:02.564186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:10.636 [2024-11-17 00:50:02.564284] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:10.636 [2024-11-17 00:50:02.564292] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15826fcd-9a39-4e0b-88df-ff1adb33f8d8 00:18:10.636 [2024-11-17 00:50:02.564302] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:10.636 [2024-11-17 00:50:02.564310] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:10.636 [2024-11-17 00:50:02.564320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:10.636 [2024-11-17 00:50:02.564328] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:10.636 [2024-11-17 00:50:02.564337] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:10.636 [2024-11-17 00:50:02.564345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:10.636 [2024-11-17 00:50:02.564369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:10.636 [2024-11-17 00:50:02.564376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:10.636 [2024-11-17 00:50:02.564384] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:10.636 [2024-11-17 00:50:02.564392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.636 [2024-11-17 00:50:02.564406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:10.636 [2024-11-17 00:50:02.564416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:18:10.636 [2024-11-17 00:50:02.564426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.566691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.636 [2024-11-17 00:50:02.566737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:10.636 [2024-11-17 00:50:02.566748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms 00:18:10.636 [2024-11-17 00:50:02.566760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.566873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.636 [2024-11-17 00:50:02.566894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:10.636 [2024-11-17 00:50:02.566905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:10.636 [2024-11-17 00:50:02.566926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.575090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.575145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.636 [2024-11-17 00:50:02.575157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.575169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.575232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.575243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.636 [2024-11-17 00:50:02.575252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.575261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.575346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.575384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.636 [2024-11-17 00:50:02.575393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.575405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.575430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.575450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.636 [2024-11-17 00:50:02.575460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.575471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.588836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.588893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.636 [2024-11-17 00:50:02.588905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 
[2024-11-17 00:50:02.588916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.599711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.599767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.636 [2024-11-17 00:50:02.599778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.599792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.599865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.599880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.636 [2024-11-17 00:50:02.599889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.599899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.599946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.599958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.636 [2024-11-17 00:50:02.599968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.599978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.600055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.600068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.636 [2024-11-17 00:50:02.600076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.600086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.600117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.600129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.636 [2024-11-17 00:50:02.600137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.600150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.600190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.600203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.636 [2024-11-17 00:50:02.600212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.600221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.600274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.636 [2024-11-17 00:50:02.600286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.636 [2024-11-17 00:50:02.600296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.636 [2024-11-17 00:50:02.600306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.636 [2024-11-17 00:50:02.600468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.701 ms, result 0 00:18:10.636 true 00:18:10.636 00:50:02 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86490 00:18:10.636 
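(The killprocess helper traced next reduces to a guarded kill-and-wait. A simplified sketch reconstructed from the traced commands; the real helper's uname/Linux, sudo, and reactor-name checks are elided here:)

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1     # the '-z' guard seen in the trace
        kill -0 "$pid" || return 0    # bail out if the PID is already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                   # reap the child so its exit status is collected
    }
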
00:50:02 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86490 ']' 00:18:10.636 00:50:02 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86490 00:18:10.636 00:50:02 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:10.636 00:50:02 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.636 00:50:02 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86490 00:18:10.636 00:50:02 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:10.636 00:50:02 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:10.636 00:50:02 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86490' 00:18:10.636 killing process with pid 86490 00:18:10.636 00:50:02 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86490 00:18:10.636 00:50:02 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86490 00:18:15.928 00:50:07 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:20.134 262144+0 records in 00:18:20.134 262144+0 records out 00:18:20.134 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.22506 s, 254 MB/s 00:18:20.134 00:50:11 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:22.044 00:50:13 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:22.044 [2024-11-17 00:50:13.690278] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:22.044 [2024-11-17 00:50:13.690414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86699 ] 00:18:22.044 [2024-11-17 00:50:13.841181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.044 [2024-11-17 00:50:13.893722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.044 [2024-11-17 00:50:14.008768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.044 [2024-11-17 00:50:14.008846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.307 [2024-11-17 00:50:14.170758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.170822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.307 [2024-11-17 00:50:14.170845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.307 [2024-11-17 00:50:14.170854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.170917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.170928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.307 [2024-11-17 00:50:14.170942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:22.307 [2024-11-17 00:50:14.170950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.170971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:18:22.307 [2024-11-17 00:50:14.171597] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.307 [2024-11-17 00:50:14.171650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.171667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.307 [2024-11-17 00:50:14.171681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:18:22.307 [2024-11-17 00:50:14.171697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.173516] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:22.307 [2024-11-17 00:50:14.177325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.177394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.307 [2024-11-17 00:50:14.177405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.811 ms 00:18:22.307 [2024-11-17 00:50:14.177414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.177501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.177512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.307 [2024-11-17 00:50:14.177525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:22.307 [2024-11-17 00:50:14.177539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.185844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.185888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.307 [2024-11-17 00:50:14.185900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.261 ms 00:18:22.307 [2024-11-17 00:50:14.185917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.186025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.186036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.307 [2024-11-17 00:50:14.186046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:22.307 [2024-11-17 00:50:14.186055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.186115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.186127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.307 [2024-11-17 00:50:14.186135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:22.307 [2024-11-17 00:50:14.186143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.186172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.307 [2024-11-17 00:50:14.188180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.188228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.307 [2024-11-17 00:50:14.188239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:18:22.307 [2024-11-17 00:50:14.188246] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.188280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.307 [2024-11-17 00:50:14.188289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.307 [2024-11-17 00:50:14.188297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:22.307 [2024-11-17 00:50:14.188305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.307 [2024-11-17 00:50:14.188326] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.307 [2024-11-17 00:50:14.188371] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:22.307 [2024-11-17 00:50:14.188409] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.307 [2024-11-17 00:50:14.188434] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:22.307 [2024-11-17 00:50:14.188541] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.307 [2024-11-17 00:50:14.188553] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.307 [2024-11-17 00:50:14.188565] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:22.307 [2024-11-17 00:50:14.188591] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.307 [2024-11-17 00:50:14.188603] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.307 [2024-11-17 00:50:14.188612] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:22.307 [2024-11-17 00:50:14.188620] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.307 [2024-11-17 00:50:14.188631] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.307 [2024-11-17 00:50:14.188639] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.307 [2024-11-17 00:50:14.188647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.308 [2024-11-17 00:50:14.188655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.308 [2024-11-17 00:50:14.188665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:18:22.308 [2024-11-17 00:50:14.188673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.308 [2024-11-17 00:50:14.188760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.308 [2024-11-17 00:50:14.188772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.308 [2024-11-17 00:50:14.188779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:22.308 [2024-11-17 00:50:14.188791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.308 [2024-11-17 00:50:14.188887] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.308 [2024-11-17 00:50:14.188898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.308 [2024-11-17 00:50:14.188907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:22.308 [2024-11-17 00:50:14.188923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.308 [2024-11-17 00:50:14.188935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.308 [2024-11-17 00:50:14.188943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.308 [2024-11-17 00:50:14.188951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:22.308 [2024-11-17 00:50:14.188959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.308 [2024-11-17 00:50:14.188967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:22.308 [2024-11-17 00:50:14.188975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.308 [2024-11-17 00:50:14.188983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.308 [2024-11-17 00:50:14.188990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:22.308 [2024-11-17 00:50:14.189001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.308 [2024-11-17 00:50:14.189010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.308 [2024-11-17 00:50:14.189019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:22.308 [2024-11-17 00:50:14.189027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.308 [2024-11-17 00:50:14.189044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:22.308 [2024-11-17 00:50:14.189051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.308 [2024-11-17 00:50:14.189068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.308 [2024-11-17 00:50:14.189084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.308 [2024-11-17 00:50:14.189092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.308 [2024-11-17 00:50:14.189107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.308 [2024-11-17 00:50:14.189115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.308 [2024-11-17 00:50:14.189139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.308 [2024-11-17 00:50:14.189147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.308 [2024-11-17 00:50:14.189164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.308 [2024-11-17 00:50:14.189172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.308 [2024-11-17 00:50:14.189188] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:22.308 [2024-11-17 00:50:14.189196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:22.308 [2024-11-17 00:50:14.189204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.308 [2024-11-17 00:50:14.189211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.308 [2024-11-17 00:50:14.189217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:22.308 [2024-11-17 00:50:14.189224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.308 [2024-11-17 00:50:14.189237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:22.308 [2024-11-17 00:50:14.189243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189250] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.308 [2024-11-17 00:50:14.189260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.308 [2024-11-17 00:50:14.189270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.308 [2024-11-17 00:50:14.189281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.308 [2024-11-17 00:50:14.189289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.308 [2024-11-17 00:50:14.189297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.308 [2024-11-17 00:50:14.189304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.308 [2024-11-17 00:50:14.189312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.308 [2024-11-17 00:50:14.189319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.308 [2024-11-17 00:50:14.189326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.308 [2024-11-17 00:50:14.189335] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.308 [2024-11-17 00:50:14.189346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.308 [2024-11-17 00:50:14.189372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:22.308 [2024-11-17 00:50:14.189380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:22.308 [2024-11-17 00:50:14.189387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:22.308 [2024-11-17 00:50:14.189395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:22.308 [2024-11-17 00:50:14.189403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:22.308 [2024-11-17 00:50:14.189414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:22.308 [2024-11-17 00:50:14.189422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:22.308 [2024-11-17 00:50:14.189431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:22.308 [2024-11-17 00:50:14.189439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:22.308 [2024-11-17 00:50:14.189447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:22.308 [2024-11-17 00:50:14.189455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:22.308 [2024-11-17 00:50:14.189465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:22.308 [2024-11-17 00:50:14.189473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:22.308 [2024-11-17 00:50:14.189479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:22.308 [2024-11-17 00:50:14.189487] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.308 [2024-11-17 00:50:14.189496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.308 [2024-11-17 00:50:14.189508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.308 [2024-11-17 00:50:14.189516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.308 [2024-11-17 00:50:14.189523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.308 [2024-11-17 00:50:14.189531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.308 [2024-11-17 00:50:14.189538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.308 [2024-11-17 00:50:14.189548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.308 [2024-11-17 00:50:14.189558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:18:22.308 [2024-11-17 00:50:14.189566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.308 [2024-11-17 00:50:14.211522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.308 [2024-11-17 00:50:14.211564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.308 [2024-11-17 00:50:14.211589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.913 ms 00:18:22.308 [2024-11-17 00:50:14.211598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.308 [2024-11-17 00:50:14.211696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.308 [2024-11-17 00:50:14.211706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.308 [2024-11-17 00:50:14.211715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.069 ms 00:18:22.308 [2024-11-17 00:50:14.211727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.308 [2024-11-17 00:50:14.219919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.308 [2024-11-17 00:50:14.219953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.308 [2024-11-17 00:50:14.219963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.130 ms 00:18:22.308 [2024-11-17 00:50:14.219970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.220002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.220010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.309 [2024-11-17 00:50:14.220018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:22.309 [2024-11-17 00:50:14.220025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.220383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.220407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.309 [2024-11-17 00:50:14.220417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:18:22.309 [2024-11-17 00:50:14.220424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.220547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.220557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.309 [2024-11-17 00:50:14.220566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:22.309 [2024-11-17 00:50:14.220585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.225218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.225249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.309 [2024-11-17 00:50:14.225263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:18:22.309 [2024-11-17 00:50:14.225271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.228067] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:22.309 [2024-11-17 00:50:14.228104] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.309 [2024-11-17 00:50:14.228119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.228127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.309 [2024-11-17 00:50:14.228135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.743 ms 00:18:22.309 [2024-11-17 00:50:14.228143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.242750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.242784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.309 [2024-11-17 00:50:14.242800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.569 ms 00:18:22.309 [2024-11-17 00:50:14.242809] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.244930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.244962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.309 [2024-11-17 00:50:14.244971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.081 ms 00:18:22.309 [2024-11-17 00:50:14.244978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.246730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.246761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.309 [2024-11-17 00:50:14.246770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.719 ms 00:18:22.309 [2024-11-17 00:50:14.246776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.247089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.247105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.309 [2024-11-17 00:50:14.247115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:18:22.309 [2024-11-17 00:50:14.247127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.264417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.264468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.309 [2024-11-17 00:50:14.264489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.274 ms 00:18:22.309 [2024-11-17 00:50:14.264502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.272070] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:22.309 [2024-11-17 00:50:14.274793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.274824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.309 [2024-11-17 00:50:14.274835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.247 ms 00:18:22.309 [2024-11-17 00:50:14.274843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.274924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.274937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.309 [2024-11-17 00:50:14.274947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.309 [2024-11-17 00:50:14.274959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.275030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.275040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.309 [2024-11-17 00:50:14.275048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:22.309 [2024-11-17 00:50:14.275056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.275079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.275086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:22.309 [2024-11-17 00:50:14.275094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.309 [2024-11-17 00:50:14.275101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.275130] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.309 [2024-11-17 00:50:14.275140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.275147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.309 [2024-11-17 00:50:14.275157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.309 [2024-11-17 00:50:14.275168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.279379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.279419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.309 [2024-11-17 00:50:14.279429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.195 ms 00:18:22.309 [2024-11-17 00:50:14.279436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.279506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.309 [2024-11-17 00:50:14.279515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.309 [2024-11-17 00:50:14.279528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:22.309 [2024-11-17 00:50:14.279535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.309 [2024-11-17 00:50:14.280646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.464 ms, result 0 00:18:23.250  [2024-11-17T00:50:16.687Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-17T00:50:17.622Z] Copying: 38/1024 [MB] (27 MBps) [2024-11-17T00:50:18.562Z] Copying: 62/1024 [MB] (24 MBps) [2024-11-17T00:50:19.496Z] Copying: 79/1024 [MB] (16 MBps) [2024-11-17T00:50:20.430Z] Copying: 107/1024 [MB] (27 MBps) [2024-11-17T00:50:21.372Z] Copying: 130/1024 [MB] (23 MBps) [2024-11-17T00:50:22.312Z] Copying: 151/1024 [MB] (20 MBps) [2024-11-17T00:50:23.690Z] Copying: 168/1024 [MB] (17 MBps) [2024-11-17T00:50:24.622Z] Copying: 192/1024 [MB] (24 MBps) [2024-11-17T00:50:25.554Z] Copying: 223/1024 [MB] (30 MBps) [2024-11-17T00:50:26.488Z] Copying: 253/1024 [MB] (29 MBps) [2024-11-17T00:50:27.429Z] Copying: 281/1024 [MB] (28 MBps) [2024-11-17T00:50:28.371Z] Copying: 315/1024 [MB] (34 MBps) [2024-11-17T00:50:29.365Z] Copying: 333/1024 [MB] (17 MBps) [2024-11-17T00:50:30.308Z] Copying: 354/1024 [MB] (21 MBps) [2024-11-17T00:50:31.694Z] Copying: 371/1024 [MB] (16 MBps) [2024-11-17T00:50:32.635Z] Copying: 386/1024 [MB] (15 MBps) [2024-11-17T00:50:33.579Z] Copying: 409/1024 [MB] (22 MBps) [2024-11-17T00:50:34.524Z] Copying: 431/1024 [MB] (22 MBps) [2024-11-17T00:50:35.534Z] Copying: 457/1024 [MB] (26 MBps) [2024-11-17T00:50:36.479Z] Copying: 480/1024 [MB] (23 MBps) [2024-11-17T00:50:37.422Z] Copying: 502/1024 [MB] (22 MBps) [2024-11-17T00:50:38.368Z] Copying: 525/1024 [MB] (22 MBps) [2024-11-17T00:50:39.317Z] Copying: 546/1024 [MB] (20 MBps) [2024-11-17T00:50:40.703Z] Copying: 565/1024 [MB] (19 MBps) [2024-11-17T00:50:41.646Z] Copying: 585/1024 [MB] (19 MBps) [2024-11-17T00:50:42.588Z] Copying: 606/1024 [MB] (20 MBps) 
[2024-11-17T00:50:43.533Z] Copying: 627/1024 [MB] (21 MBps) [2024-11-17T00:50:44.478Z] Copying: 650/1024 [MB] (22 MBps) [2024-11-17T00:50:45.420Z] Copying: 675/1024 [MB] (25 MBps) [2024-11-17T00:50:46.364Z] Copying: 694/1024 [MB] (19 MBps) [2024-11-17T00:50:47.307Z] Copying: 716/1024 [MB] (22 MBps) [2024-11-17T00:50:48.694Z] Copying: 734/1024 [MB] (17 MBps) [2024-11-17T00:50:49.636Z] Copying: 749/1024 [MB] (14 MBps) [2024-11-17T00:50:50.578Z] Copying: 763/1024 [MB] (13 MBps) [2024-11-17T00:50:51.522Z] Copying: 781/1024 [MB] (17 MBps) [2024-11-17T00:50:52.464Z] Copying: 791/1024 [MB] (10 MBps) [2024-11-17T00:50:53.408Z] Copying: 803/1024 [MB] (12 MBps) [2024-11-17T00:50:54.352Z] Copying: 815/1024 [MB] (11 MBps) [2024-11-17T00:50:55.298Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-17T00:50:56.685Z] Copying: 836/1024 [MB] (10 MBps) [2024-11-17T00:50:57.631Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-17T00:50:58.575Z] Copying: 857/1024 [MB] (10 MBps) [2024-11-17T00:50:59.519Z] Copying: 876/1024 [MB] (18 MBps) [2024-11-17T00:51:00.463Z] Copying: 890/1024 [MB] (13 MBps) [2024-11-17T00:51:01.404Z] Copying: 900/1024 [MB] (10 MBps) [2024-11-17T00:51:02.347Z] Copying: 910/1024 [MB] (10 MBps) [2024-11-17T00:51:03.735Z] Copying: 920/1024 [MB] (10 MBps) [2024-11-17T00:51:04.307Z] Copying: 931/1024 [MB] (10 MBps) [2024-11-17T00:51:05.693Z] Copying: 941/1024 [MB] (10 MBps) [2024-11-17T00:51:06.634Z] Copying: 951/1024 [MB] (10 MBps) [2024-11-17T00:51:07.576Z] Copying: 962/1024 [MB] (10 MBps) [2024-11-17T00:51:08.515Z] Copying: 973/1024 [MB] (10 MBps) [2024-11-17T00:51:09.459Z] Copying: 983/1024 [MB] (10 MBps) [2024-11-17T00:51:10.402Z] Copying: 999/1024 [MB] (15 MBps) [2024-11-17T00:51:11.349Z] Copying: 1011/1024 [MB] (11 MBps) [2024-11-17T00:51:11.349Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-17 00:51:11.064728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.064787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:19.286 [2024-11-17 00:51:11.064803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:19.286 [2024-11-17 00:51:11.064812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.064834] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:19.286 [2024-11-17 00:51:11.065657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.065699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:19.286 [2024-11-17 00:51:11.065721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:19:19.286 [2024-11-17 00:51:11.065734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.068671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.068719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:19.286 [2024-11-17 00:51:11.068730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms 00:19:19.286 [2024-11-17 00:51:11.068738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.087954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.088015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:19.286 [2024-11-17 
00:51:11.088027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.178 ms 00:19:19.286 [2024-11-17 00:51:11.088040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.094236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.094281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:19.286 [2024-11-17 00:51:11.094293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.151 ms 00:19:19.286 [2024-11-17 00:51:11.094300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.097130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.097184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:19.286 [2024-11-17 00:51:11.097195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:19:19.286 [2024-11-17 00:51:11.097202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.102094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.102162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:19.286 [2024-11-17 00:51:11.102172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.845 ms 00:19:19.286 [2024-11-17 00:51:11.102181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.102306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.102316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:19.286 [2024-11-17 00:51:11.102325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:19.286 [2024-11-17 00:51:11.102333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.105817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.105869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:19.286 [2024-11-17 00:51:11.105879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.467 ms 00:19:19.286 [2024-11-17 00:51:11.105887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.109077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.109142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:19.286 [2024-11-17 00:51:11.109152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:19:19.286 [2024-11-17 00:51:11.109159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.111691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.111742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:19.286 [2024-11-17 00:51:11.111752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.487 ms 00:19:19.286 [2024-11-17 00:51:11.111761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.114317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.286 [2024-11-17 00:51:11.114383] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:19.286 [2024-11-17 00:51:11.114393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:19:19.286 [2024-11-17 00:51:11.114399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.286 [2024-11-17 00:51:11.114441] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:19.286 [2024-11-17 00:51:11.114457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 
00:51:11.114634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:19.286 [2024-11-17 00:51:11.114657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:19:19.287 [2024-11-17 00:51:11.114820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.114999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:19.287 [2024-11-17 00:51:11.115219] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:19.287 [2024-11-17 00:51:11.115228] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15826fcd-9a39-4e0b-88df-ff1adb33f8d8 00:19:19.287 [2024-11-17 00:51:11.115236] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:19.287 [2024-11-17 00:51:11.115243] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:19.287 [2024-11-17 00:51:11.115251] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:19.287 [2024-11-17 00:51:11.115266] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:19.287 [2024-11-17 00:51:11.115273] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:19.287 [2024-11-17 00:51:11.115281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:19.287 [2024-11-17 00:51:11.115288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:19.287 [2024-11-17 00:51:11.115295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:19.287 [2024-11-17 00:51:11.115301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:19.287 [2024-11-17 00:51:11.115308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.287 [2024-11-17 00:51:11.115316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:19.287 [2024-11-17 00:51:11.115325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:19:19.287 [2024-11-17 00:51:11.115341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.287 [2024-11-17 00:51:11.117793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.287 [2024-11-17 00:51:11.117843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:19.288 [2024-11-17 00:51:11.117854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.406 ms 00:19:19.288 [2024-11-17 00:51:11.117863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.117988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.288 [2024-11-17 00:51:11.117998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:19.288 [2024-11-17 00:51:11.118012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:19.288 [2024-11-17 00:51:11.118022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.125101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.125154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:19.288 [2024-11-17 00:51:11.125165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.125172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 
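A note on the statistics dump a few entries above: assuming the conventional definition of write amplification (total device writes divided by user writes), the reported WAF of inf follows directly from the counters — this pass issued 960 internal (metadata) writes and zero user writes:

$$\mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{960}{0} \rightarrow \infty$$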
[2024-11-17 00:51:11.125241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.125250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.288 [2024-11-17 00:51:11.125262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.125271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.125338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.125349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.288 [2024-11-17 00:51:11.125395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.125403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.125419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.125427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.288 [2024-11-17 00:51:11.125435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.125447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.139059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.139112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.288 [2024-11-17 00:51:11.139123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.139131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.149323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.149439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.288 [2024-11-17 00:51:11.149458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.149466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.149516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.149525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:19.288 [2024-11-17 00:51:11.149534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.149542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.149583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.149592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:19.288 [2024-11-17 00:51:11.149600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.149607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.149677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.149687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:19.288 [2024-11-17 00:51:11.149694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.149701] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.149728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.149738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:19.288 [2024-11-17 00:51:11.149746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.149753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.149794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.149810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:19.288 [2024-11-17 00:51:11.149819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.149826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.149870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.288 [2024-11-17 00:51:11.149919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:19.288 [2024-11-17 00:51:11.149927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.288 [2024-11-17 00:51:11.149936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.288 [2024-11-17 00:51:11.150067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.306 ms, result 0 00:19:19.549 00:19:19.549 00:19:19.810 00:51:11 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:19.810 [2024-11-17 00:51:11.693836] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:19:19.810 [2024-11-17 00:51:11.693988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87299 ] 00:19:19.810 [2024-11-17 00:51:11.844911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.072 [2024-11-17 00:51:11.918213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.072 [2024-11-17 00:51:12.068060] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.072 [2024-11-17 00:51:12.068155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.335 [2024-11-17 00:51:12.232315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.232392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:20.335 [2024-11-17 00:51:12.232413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:20.335 [2024-11-17 00:51:12.232423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.232487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.232499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.335 [2024-11-17 00:51:12.232508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:20.335 [2024-11-17 00:51:12.232517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.232540] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:20.335 [2024-11-17 00:51:12.232866] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:20.335 [2024-11-17 00:51:12.232911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.232925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.335 [2024-11-17 00:51:12.232938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:19:20.335 [2024-11-17 00:51:12.232950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.235263] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:20.335 [2024-11-17 00:51:12.240073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.240135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:20.335 [2024-11-17 00:51:12.240148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.812 ms 00:19:20.335 [2024-11-17 00:51:12.240157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.240248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.240260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:20.335 [2024-11-17 00:51:12.240273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:20.335 [2024-11-17 00:51:12.240281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.251958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:20.335 [2024-11-17 00:51:12.252002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.335 [2024-11-17 00:51:12.252016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.632 ms 00:19:20.335 [2024-11-17 00:51:12.252033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.252150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.252166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.335 [2024-11-17 00:51:12.252179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:20.335 [2024-11-17 00:51:12.252194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.252261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.252273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:20.335 [2024-11-17 00:51:12.252287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:20.335 [2024-11-17 00:51:12.252297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.252324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.335 [2024-11-17 00:51:12.255098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.255142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.335 [2024-11-17 00:51:12.255153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:19:20.335 [2024-11-17 00:51:12.255163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.255202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.255212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:20.335 [2024-11-17 00:51:12.255222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:20.335 [2024-11-17 00:51:12.255232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.255258] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:20.335 [2024-11-17 00:51:12.255292] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:20.335 [2024-11-17 00:51:12.255338] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:20.335 [2024-11-17 00:51:12.255376] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:20.335 [2024-11-17 00:51:12.255491] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:20.335 [2024-11-17 00:51:12.255504] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:20.335 [2024-11-17 00:51:12.255523] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:20.335 [2024-11-17 00:51:12.255534] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:20.335 [2024-11-17 00:51:12.255548] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:20.335 [2024-11-17 00:51:12.255558] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:20.335 [2024-11-17 00:51:12.255569] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:20.335 [2024-11-17 00:51:12.255578] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:20.335 [2024-11-17 00:51:12.255590] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:20.335 [2024-11-17 00:51:12.255600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.255608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:20.335 [2024-11-17 00:51:12.255617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:19:20.335 [2024-11-17 00:51:12.255626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.335 [2024-11-17 00:51:12.255713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.335 [2024-11-17 00:51:12.255726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:20.335 [2024-11-17 00:51:12.255734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:20.336 [2024-11-17 00:51:12.255742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.336 [2024-11-17 00:51:12.255847] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:20.336 [2024-11-17 00:51:12.255869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:20.336 [2024-11-17 00:51:12.255878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.336 [2024-11-17 00:51:12.255896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.336 [2024-11-17 00:51:12.255909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:20.336 [2024-11-17 00:51:12.255917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:20.336 [2024-11-17 00:51:12.255925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:20.336 [2024-11-17 00:51:12.255934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:20.336 [2024-11-17 00:51:12.255941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:20.336 [2024-11-17 00:51:12.255950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.336 [2024-11-17 00:51:12.255958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:20.336 [2024-11-17 00:51:12.255965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:20.336 [2024-11-17 00:51:12.255975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.336 [2024-11-17 00:51:12.255983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:20.336 [2024-11-17 00:51:12.255992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:20.336 [2024-11-17 00:51:12.256000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:20.336 [2024-11-17 00:51:12.256016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:20.336 [2024-11-17 00:51:12.256024] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:20.336 [2024-11-17 00:51:12.256038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.336 [2024-11-17 00:51:12.256053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:20.336 [2024-11-17 00:51:12.256061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.336 [2024-11-17 00:51:12.256076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:20.336 [2024-11-17 00:51:12.256084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.336 [2024-11-17 00:51:12.256110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:20.336 [2024-11-17 00:51:12.256118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.336 [2024-11-17 00:51:12.256133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:20.336 [2024-11-17 00:51:12.256140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.336 [2024-11-17 00:51:12.256153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:20.336 [2024-11-17 00:51:12.256161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:20.336 [2024-11-17 00:51:12.256168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.336 [2024-11-17 00:51:12.256175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:20.336 [2024-11-17 00:51:12.256182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:20.336 [2024-11-17 00:51:12.256189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:20.336 [2024-11-17 00:51:12.256203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:20.336 [2024-11-17 00:51:12.256209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256216] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:20.336 [2024-11-17 00:51:12.256227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:20.336 [2024-11-17 00:51:12.256236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.336 [2024-11-17 00:51:12.256250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.336 [2024-11-17 00:51:12.256259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:20.336 [2024-11-17 00:51:12.256267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:20.336 [2024-11-17 00:51:12.256274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:20.336 
[2024-11-17 00:51:12.256281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:20.336 [2024-11-17 00:51:12.256289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:20.336 [2024-11-17 00:51:12.256296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:20.336 [2024-11-17 00:51:12.256305] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:20.336 [2024-11-17 00:51:12.256315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.336 [2024-11-17 00:51:12.256324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:20.336 [2024-11-17 00:51:12.256333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:20.336 [2024-11-17 00:51:12.256341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:20.336 [2024-11-17 00:51:12.256348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:20.336 [2024-11-17 00:51:12.256379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:20.336 [2024-11-17 00:51:12.256394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:20.336 [2024-11-17 00:51:12.256403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:20.336 [2024-11-17 00:51:12.256411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:20.336 [2024-11-17 00:51:12.256420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:20.336 [2024-11-17 00:51:12.256428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:20.336 [2024-11-17 00:51:12.256436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:20.336 [2024-11-17 00:51:12.256444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:20.336 [2024-11-17 00:51:12.256451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:20.336 [2024-11-17 00:51:12.256460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:20.336 [2024-11-17 00:51:12.256468] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:20.336 [2024-11-17 00:51:12.256478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.336 [2024-11-17 00:51:12.256487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:20.336 [2024-11-17 00:51:12.256495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:20.336 [2024-11-17 00:51:12.256503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:20.336 [2024-11-17 00:51:12.256511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:20.336 [2024-11-17 00:51:12.256518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.336 [2024-11-17 00:51:12.256534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:20.336 [2024-11-17 00:51:12.256543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:19:20.336 [2024-11-17 00:51:12.256554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.336 [2024-11-17 00:51:12.284653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.336 [2024-11-17 00:51:12.284718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.336 [2024-11-17 00:51:12.284752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.002 ms 00:19:20.336 [2024-11-17 00:51:12.284769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.336 [2024-11-17 00:51:12.284901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.336 [2024-11-17 00:51:12.284916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.336 [2024-11-17 00:51:12.284928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:20.336 [2024-11-17 00:51:12.284939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.336 [2024-11-17 00:51:12.301186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.336 [2024-11-17 00:51:12.301238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.336 [2024-11-17 00:51:12.301251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.166 ms 00:19:20.336 [2024-11-17 00:51:12.301262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.336 [2024-11-17 00:51:12.301304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.336 [2024-11-17 00:51:12.301314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.336 [2024-11-17 00:51:12.301325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.336 [2024-11-17 00:51:12.301334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.336 [2024-11-17 00:51:12.302073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.336 [2024-11-17 00:51:12.302123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.337 [2024-11-17 00:51:12.302135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:19:20.337 [2024-11-17 00:51:12.302144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.302318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.302330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.337 [2024-11-17 00:51:12.302340] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:19:20.337 [2024-11-17 00:51:12.302371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.312105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.312154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.337 [2024-11-17 00:51:12.312174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.709 ms 00:19:20.337 [2024-11-17 00:51:12.312183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.317144] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:20.337 [2024-11-17 00:51:12.317203] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:20.337 [2024-11-17 00:51:12.317217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.317228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:20.337 [2024-11-17 00:51:12.317239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.914 ms 00:19:20.337 [2024-11-17 00:51:12.317247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.333855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.333912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:20.337 [2024-11-17 00:51:12.333927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.549 ms 00:19:20.337 [2024-11-17 00:51:12.333937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.336964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.337014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:20.337 [2024-11-17 00:51:12.337025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:19:20.337 [2024-11-17 00:51:12.337033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.339716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.339762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:20.337 [2024-11-17 00:51:12.339773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:19:20.337 [2024-11-17 00:51:12.339781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.340146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.340169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.337 [2024-11-17 00:51:12.340181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:19:20.337 [2024-11-17 00:51:12.340198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.371896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.371964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:20.337 [2024-11-17 00:51:12.371978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.670 ms 00:19:20.337 [2024-11-17 00:51:12.371986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.381245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:20.337 [2024-11-17 00:51:12.385596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.385646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.337 [2024-11-17 00:51:12.385672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.552 ms 00:19:20.337 [2024-11-17 00:51:12.385685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.385785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.385798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.337 [2024-11-17 00:51:12.385809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:20.337 [2024-11-17 00:51:12.385825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.385902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.385915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.337 [2024-11-17 00:51:12.385930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:20.337 [2024-11-17 00:51:12.385941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.385963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.385972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.337 [2024-11-17 00:51:12.385981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:20.337 [2024-11-17 00:51:12.385995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.386041] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.337 [2024-11-17 00:51:12.386055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.386064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.337 [2024-11-17 00:51:12.386073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:20.337 [2024-11-17 00:51:12.386081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.392939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.392992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.337 [2024-11-17 00:51:12.393005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.834 ms 00:19:20.337 [2024-11-17 00:51:12.393014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 [2024-11-17 00:51:12.393121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.337 [2024-11-17 00:51:12.393133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.337 [2024-11-17 00:51:12.393143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:20.337 [2024-11-17 00:51:12.393157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.337 
[2024-11-17 00:51:12.394608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.717 ms, result 0 00:19:21.723  [2024-11-17T00:51:14.730Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-17T00:51:15.674Z] Copying: 22/1024 [MB] (11 MBps) [2024-11-17T00:51:16.619Z] Copying: 34/1024 [MB] (11 MBps) [2024-11-17T00:51:18.009Z] Copying: 47/1024 [MB] (12 MBps) [2024-11-17T00:51:18.583Z] Copying: 58/1024 [MB] (11 MBps) [2024-11-17T00:51:19.969Z] Copying: 69/1024 [MB] (11 MBps) [2024-11-17T00:51:20.912Z] Copying: 80/1024 [MB] (10 MBps) [2024-11-17T00:51:21.880Z] Copying: 91/1024 [MB] (11 MBps) [2024-11-17T00:51:22.906Z] Copying: 103/1024 [MB] (11 MBps) [2024-11-17T00:51:23.849Z] Copying: 115/1024 [MB] (11 MBps) [2024-11-17T00:51:24.793Z] Copying: 127/1024 [MB] (11 MBps) [2024-11-17T00:51:25.736Z] Copying: 138/1024 [MB] (11 MBps) [2024-11-17T00:51:26.683Z] Copying: 150/1024 [MB] (11 MBps) [2024-11-17T00:51:27.628Z] Copying: 161/1024 [MB] (11 MBps) [2024-11-17T00:51:29.018Z] Copying: 181/1024 [MB] (19 MBps) [2024-11-17T00:51:29.591Z] Copying: 192/1024 [MB] (10 MBps) [2024-11-17T00:51:30.979Z] Copying: 203/1024 [MB] (10 MBps) [2024-11-17T00:51:31.927Z] Copying: 217/1024 [MB] (13 MBps) [2024-11-17T00:51:32.870Z] Copying: 236/1024 [MB] (19 MBps) [2024-11-17T00:51:33.812Z] Copying: 247/1024 [MB] (11 MBps) [2024-11-17T00:51:34.756Z] Copying: 259/1024 [MB] (11 MBps) [2024-11-17T00:51:35.699Z] Copying: 271/1024 [MB] (11 MBps) [2024-11-17T00:51:36.646Z] Copying: 283/1024 [MB] (11 MBps) [2024-11-17T00:51:37.589Z] Copying: 295/1024 [MB] (12 MBps) [2024-11-17T00:51:38.976Z] Copying: 307/1024 [MB] (12 MBps) [2024-11-17T00:51:39.919Z] Copying: 319/1024 [MB] (11 MBps) [2024-11-17T00:51:40.863Z] Copying: 331/1024 [MB] (11 MBps) [2024-11-17T00:51:41.808Z] Copying: 343/1024 [MB] (11 MBps) [2024-11-17T00:51:42.749Z] Copying: 353/1024 [MB] (10 MBps) [2024-11-17T00:51:43.690Z] Copying: 364/1024 [MB] (10 MBps) [2024-11-17T00:51:44.632Z] Copying: 379/1024 [MB] (15 MBps) [2024-11-17T00:51:46.016Z] Copying: 395/1024 [MB] (16 MBps) [2024-11-17T00:51:46.588Z] Copying: 406/1024 [MB] (10 MBps) [2024-11-17T00:51:48.027Z] Copying: 422/1024 [MB] (16 MBps) [2024-11-17T00:51:48.598Z] Copying: 439/1024 [MB] (17 MBps) [2024-11-17T00:51:49.983Z] Copying: 454/1024 [MB] (14 MBps) [2024-11-17T00:51:50.923Z] Copying: 465/1024 [MB] (10 MBps) [2024-11-17T00:51:51.864Z] Copying: 480/1024 [MB] (14 MBps) [2024-11-17T00:51:52.804Z] Copying: 499/1024 [MB] (19 MBps) [2024-11-17T00:51:53.747Z] Copying: 515/1024 [MB] (16 MBps) [2024-11-17T00:51:54.688Z] Copying: 538/1024 [MB] (22 MBps) [2024-11-17T00:51:55.632Z] Copying: 551/1024 [MB] (12 MBps) [2024-11-17T00:51:57.020Z] Copying: 563/1024 [MB] (12 MBps) [2024-11-17T00:51:57.591Z] Copying: 581/1024 [MB] (17 MBps) [2024-11-17T00:51:58.977Z] Copying: 598/1024 [MB] (17 MBps) [2024-11-17T00:51:59.920Z] Copying: 618/1024 [MB] (19 MBps) [2024-11-17T00:52:00.862Z] Copying: 632/1024 [MB] (14 MBps) [2024-11-17T00:52:01.803Z] Copying: 642/1024 [MB] (10 MBps) [2024-11-17T00:52:02.747Z] Copying: 653/1024 [MB] (10 MBps) [2024-11-17T00:52:03.689Z] Copying: 664/1024 [MB] (10 MBps) [2024-11-17T00:52:04.632Z] Copying: 674/1024 [MB] (10 MBps) [2024-11-17T00:52:06.020Z] Copying: 685/1024 [MB] (10 MBps) [2024-11-17T00:52:06.594Z] Copying: 696/1024 [MB] (11 MBps) [2024-11-17T00:52:07.980Z] Copying: 706/1024 [MB] (10 MBps) [2024-11-17T00:52:08.924Z] Copying: 717/1024 [MB] (10 MBps) [2024-11-17T00:52:09.868Z] Copying: 727/1024 [MB] (10 MBps) 
[2024-11-17T00:52:10.810Z] Copying: 739/1024 [MB] (12 MBps) [2024-11-17T00:52:11.755Z] Copying: 760/1024 [MB] (20 MBps) [2024-11-17T00:52:12.696Z] Copying: 781/1024 [MB] (21 MBps) [2024-11-17T00:52:13.641Z] Copying: 801/1024 [MB] (19 MBps) [2024-11-17T00:52:14.583Z] Copying: 821/1024 [MB] (20 MBps) [2024-11-17T00:52:15.971Z] Copying: 835/1024 [MB] (14 MBps) [2024-11-17T00:52:16.915Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-17T00:52:17.860Z] Copying: 857/1024 [MB] (10 MBps) [2024-11-17T00:52:18.803Z] Copying: 875/1024 [MB] (17 MBps) [2024-11-17T00:52:19.749Z] Copying: 897/1024 [MB] (22 MBps) [2024-11-17T00:52:20.718Z] Copying: 916/1024 [MB] (18 MBps) [2024-11-17T00:52:21.674Z] Copying: 932/1024 [MB] (16 MBps) [2024-11-17T00:52:22.620Z] Copying: 944/1024 [MB] (11 MBps) [2024-11-17T00:52:24.007Z] Copying: 961/1024 [MB] (17 MBps) [2024-11-17T00:52:24.952Z] Copying: 971/1024 [MB] (10 MBps) [2024-11-17T00:52:25.895Z] Copying: 982/1024 [MB] (10 MBps) [2024-11-17T00:52:26.838Z] Copying: 997/1024 [MB] (15 MBps) [2024-11-17T00:52:27.409Z] Copying: 1012/1024 [MB] (14 MBps) [2024-11-17T00:52:27.671Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-17 00:52:27.534076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.534161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:35.608 [2024-11-17 00:52:27.534185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:35.608 [2024-11-17 00:52:27.534209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.534251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:35.608 [2024-11-17 00:52:27.534988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.535031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:35.608 [2024-11-17 00:52:27.535049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:20:35.608 [2024-11-17 00:52:27.535064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.535461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.535487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:35.608 [2024-11-17 00:52:27.535503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:20:35.608 [2024-11-17 00:52:27.535517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.541658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.541693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:35.608 [2024-11-17 00:52:27.541703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:20:35.608 [2024-11-17 00:52:27.541715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.549668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.549699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:35.608 [2024-11-17 00:52:27.549755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.936 ms 00:20:35.608 [2024-11-17 00:52:27.549764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 
00:52:27.552297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.552334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:35.608 [2024-11-17 00:52:27.552344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.475 ms 00:20:35.608 [2024-11-17 00:52:27.552364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.557047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.557163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:35.608 [2024-11-17 00:52:27.557196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.634 ms 00:20:35.608 [2024-11-17 00:52:27.557217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.557596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.557678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:35.608 [2024-11-17 00:52:27.557703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:20:35.608 [2024-11-17 00:52:27.557723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.560871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.560952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:35.608 [2024-11-17 00:52:27.560977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:20:35.608 [2024-11-17 00:52:27.560996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.563525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.563599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:35.608 [2024-11-17 00:52:27.563621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.433 ms 00:20:35.608 [2024-11-17 00:52:27.563640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.565803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.565883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:35.608 [2024-11-17 00:52:27.565899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.093 ms 00:20:35.608 [2024-11-17 00:52:27.565912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.567881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.608 [2024-11-17 00:52:27.567941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:35.608 [2024-11-17 00:52:27.567958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.869 ms 00:20:35.608 [2024-11-17 00:52:27.567970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.608 [2024-11-17 00:52:27.568020] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:35.608 [2024-11-17 00:52:27.568060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:20:35.608 [2024-11-17 00:52:27.568109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:20:35.608 [2024-11-17 00:52:27.568858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:35.608 [2024-11-17 00:52:27.568872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.568896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.568912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.568938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.568974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.568990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569559] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:35.609 [2024-11-17 00:52:27.569791] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:35.609 [2024-11-17 00:52:27.569800] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 15826fcd-9a39-4e0b-88df-ff1adb33f8d8 00:20:35.609 [2024-11-17 00:52:27.569808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:35.609 [2024-11-17 00:52:27.569816] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:35.609 [2024-11-17 00:52:27.569829] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:35.609 [2024-11-17 00:52:27.569840] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:35.609 [2024-11-17 00:52:27.569847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:35.609 [2024-11-17 00:52:27.569858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:35.609 [2024-11-17 00:52:27.569868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:35.609 [2024-11-17 00:52:27.569874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:35.609 [2024-11-17 00:52:27.569881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:35.609 [2024-11-17 00:52:27.569890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.609 [2024-11-17 00:52:27.569901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:35.609 [2024-11-17 00:52:27.569922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:20:35.609 [2024-11-17 00:52:27.569932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.609 [2024-11-17 00:52:27.571552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.609 [2024-11-17 00:52:27.571586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:35.609 [2024-11-17 00:52:27.571595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:20:35.609 [2024-11-17 00:52:27.571603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.609 [2024-11-17 00:52:27.571701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.609 [2024-11-17 00:52:27.571714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:35.609 [2024-11-17 00:52:27.571722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:35.610 [2024-11-17 00:52:27.571730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.576526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.576562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.610 [2024-11-17 00:52:27.576571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.576585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.576652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.576664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.610 [2024-11-17 00:52:27.576672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.576683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.576720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.576729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:20:35.610 [2024-11-17 00:52:27.576737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.576745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.576760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.576767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.610 [2024-11-17 00:52:27.576777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.576785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.586436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.586476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.610 [2024-11-17 00:52:27.586486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.586493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.594295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.594335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.610 [2024-11-17 00:52:27.594350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.594370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.594418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.594426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.610 [2024-11-17 00:52:27.594435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.594443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.594467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.594475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:35.610 [2024-11-17 00:52:27.594483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.594493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.594566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.594576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.610 [2024-11-17 00:52:27.594583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.594592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.594634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.594648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:35.610 [2024-11-17 00:52:27.594661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.594673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.594721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.594734] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.610 [2024-11-17 00:52:27.594747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.594757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.594799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.610 [2024-11-17 00:52:27.594821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.610 [2024-11-17 00:52:27.594830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.610 [2024-11-17 00:52:27.594840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.610 [2024-11-17 00:52:27.594972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.877 ms, result 0 00:20:35.871 00:20:35.871 00:20:35.871 00:52:27 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:38.421 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:38.421 00:52:30 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:38.421 [2024-11-17 00:52:30.102335] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:38.421 [2024-11-17 00:52:30.102508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88115 ] 00:20:38.421 [2024-11-17 00:52:30.254136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.421 [2024-11-17 00:52:30.303437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.421 [2024-11-17 00:52:30.412386] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:38.421 [2024-11-17 00:52:30.412466] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:38.685 [2024-11-17 00:52:30.573737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.573802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:38.685 [2024-11-17 00:52:30.573820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:38.685 [2024-11-17 00:52:30.573833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.573891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.573903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.685 [2024-11-17 00:52:30.573911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:38.685 [2024-11-17 00:52:30.573919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.573945] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:38.685 [2024-11-17 00:52:30.574378] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:38.685 [2024-11-17 00:52:30.574423] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.574432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.685 [2024-11-17 00:52:30.574445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:20:38.685 [2024-11-17 00:52:30.574457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.576209] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:38.685 [2024-11-17 00:52:30.580076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.580130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:38.685 [2024-11-17 00:52:30.580151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.871 ms 00:20:38.685 [2024-11-17 00:52:30.580159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.580242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.580252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:38.685 [2024-11-17 00:52:30.580265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:38.685 [2024-11-17 00:52:30.580273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.588554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.588597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.685 [2024-11-17 00:52:30.588608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.233 ms 00:20:38.685 [2024-11-17 00:52:30.588629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.588750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.588761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.685 [2024-11-17 00:52:30.588770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:38.685 [2024-11-17 00:52:30.588778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.588840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.588851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:38.685 [2024-11-17 00:52:30.588863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:38.685 [2024-11-17 00:52:30.588871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.588898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:38.685 [2024-11-17 00:52:30.590871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.590908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.685 [2024-11-17 00:52:30.590918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.982 ms 00:20:38.685 [2024-11-17 00:52:30.590926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.590959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.590969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:20:38.685 [2024-11-17 00:52:30.590977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:38.685 [2024-11-17 00:52:30.590994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.591017] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:38.685 [2024-11-17 00:52:30.591044] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:38.685 [2024-11-17 00:52:30.591085] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:38.685 [2024-11-17 00:52:30.591105] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:38.685 [2024-11-17 00:52:30.591213] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:38.685 [2024-11-17 00:52:30.591224] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:38.685 [2024-11-17 00:52:30.591235] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:38.685 [2024-11-17 00:52:30.591245] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:38.685 [2024-11-17 00:52:30.591258] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:38.685 [2024-11-17 00:52:30.591266] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:38.685 [2024-11-17 00:52:30.591274] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:38.685 [2024-11-17 00:52:30.591281] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:38.685 [2024-11-17 00:52:30.591288] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:38.685 [2024-11-17 00:52:30.591296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.591304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:38.685 [2024-11-17 00:52:30.591312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:38.685 [2024-11-17 00:52:30.591321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.591423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.685 [2024-11-17 00:52:30.591436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:38.685 [2024-11-17 00:52:30.591443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:38.685 [2024-11-17 00:52:30.591451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.685 [2024-11-17 00:52:30.591551] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:38.685 [2024-11-17 00:52:30.591563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:38.685 [2024-11-17 00:52:30.591573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.685 [2024-11-17 00:52:30.591588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.685 [2024-11-17 00:52:30.591598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:38.685 [2024-11-17 
00:52:30.591607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:38.685 [2024-11-17 00:52:30.591615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:38.685 [2024-11-17 00:52:30.591624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:38.685 [2024-11-17 00:52:30.591632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:38.685 [2024-11-17 00:52:30.591640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.685 [2024-11-17 00:52:30.591648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:38.685 [2024-11-17 00:52:30.591656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:38.685 [2024-11-17 00:52:30.591667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.686 [2024-11-17 00:52:30.591675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:38.686 [2024-11-17 00:52:30.591683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:38.686 [2024-11-17 00:52:30.591695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:38.686 [2024-11-17 00:52:30.591712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:38.686 [2024-11-17 00:52:30.591720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:38.686 [2024-11-17 00:52:30.591737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.686 [2024-11-17 00:52:30.591754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:38.686 [2024-11-17 00:52:30.591762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.686 [2024-11-17 00:52:30.591778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:38.686 [2024-11-17 00:52:30.591785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.686 [2024-11-17 00:52:30.591806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:38.686 [2024-11-17 00:52:30.591813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.686 [2024-11-17 00:52:30.591829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:38.686 [2024-11-17 00:52:30.591837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.686 [2024-11-17 00:52:30.591852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:38.686 [2024-11-17 00:52:30.591859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:38.686 [2024-11-17 00:52:30.591867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:20:38.686 [2024-11-17 00:52:30.591875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:38.686 [2024-11-17 00:52:30.591882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:38.686 [2024-11-17 00:52:30.591890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:38.686 [2024-11-17 00:52:30.591905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:38.686 [2024-11-17 00:52:30.591913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591920] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:38.686 [2024-11-17 00:52:30.591932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:38.686 [2024-11-17 00:52:30.591941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.686 [2024-11-17 00:52:30.591954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.686 [2024-11-17 00:52:30.591964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:38.686 [2024-11-17 00:52:30.591973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:38.686 [2024-11-17 00:52:30.591981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:38.686 [2024-11-17 00:52:30.591989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:38.686 [2024-11-17 00:52:30.591997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:38.686 [2024-11-17 00:52:30.592006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:38.686 [2024-11-17 00:52:30.592016] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:38.686 [2024-11-17 00:52:30.592031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.686 [2024-11-17 00:52:30.592041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:38.686 [2024-11-17 00:52:30.592050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:38.686 [2024-11-17 00:52:30.592057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:38.686 [2024-11-17 00:52:30.592064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:38.686 [2024-11-17 00:52:30.592072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:38.686 [2024-11-17 00:52:30.592081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:38.686 [2024-11-17 00:52:30.592088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:38.686 [2024-11-17 00:52:30.592096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:38.686 [2024-11-17 
00:52:30.592104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:38.686 [2024-11-17 00:52:30.592111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:38.686 [2024-11-17 00:52:30.592118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:38.686 [2024-11-17 00:52:30.592125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:38.686 [2024-11-17 00:52:30.592132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:38.686 [2024-11-17 00:52:30.592139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:38.686 [2024-11-17 00:52:30.592147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:38.686 [2024-11-17 00:52:30.592155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.686 [2024-11-17 00:52:30.592163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:38.686 [2024-11-17 00:52:30.592170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:38.686 [2024-11-17 00:52:30.592177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:38.686 [2024-11-17 00:52:30.592184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:38.686 [2024-11-17 00:52:30.592191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.686 [2024-11-17 00:52:30.592202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:38.686 [2024-11-17 00:52:30.592210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:20:38.686 [2024-11-17 00:52:30.592218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.686 [2024-11-17 00:52:30.615213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.686 [2024-11-17 00:52:30.615276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.686 [2024-11-17 00:52:30.615296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.949 ms 00:20:38.686 [2024-11-17 00:52:30.615311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.686 [2024-11-17 00:52:30.615428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.686 [2024-11-17 00:52:30.615439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:38.686 [2024-11-17 00:52:30.615448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:38.686 [2024-11-17 00:52:30.615457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.686 [2024-11-17 00:52:30.627272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.686 [2024-11-17 
00:52:30.627324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.686 [2024-11-17 00:52:30.627337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.750 ms 00:20:38.686 [2024-11-17 00:52:30.627346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.686 [2024-11-17 00:52:30.627408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.686 [2024-11-17 00:52:30.627419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.686 [2024-11-17 00:52:30.627429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:38.686 [2024-11-17 00:52:30.627438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.686 [2024-11-17 00:52:30.628004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.686 [2024-11-17 00:52:30.628053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.686 [2024-11-17 00:52:30.628066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:20:38.686 [2024-11-17 00:52:30.628076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.686 [2024-11-17 00:52:30.628244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.686 [2024-11-17 00:52:30.628260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.686 [2024-11-17 00:52:30.628271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:20:38.686 [2024-11-17 00:52:30.628282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.686 [2024-11-17 00:52:30.635014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.686 [2024-11-17 00:52:30.635060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.686 [2024-11-17 00:52:30.635077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.704 ms 00:20:38.686 [2024-11-17 00:52:30.635085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.686 [2024-11-17 00:52:30.638825] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:38.686 [2024-11-17 00:52:30.638877] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:38.686 [2024-11-17 00:52:30.638889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.686 [2024-11-17 00:52:30.638898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:38.686 [2024-11-17 00:52:30.638907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.715 ms 00:20:38.687 [2024-11-17 00:52:30.638914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.654656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.654713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:38.687 [2024-11-17 00:52:30.654728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.689 ms 00:20:38.687 [2024-11-17 00:52:30.654737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.657606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.657651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:20:38.687 [2024-11-17 00:52:30.657660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.818 ms 00:20:38.687 [2024-11-17 00:52:30.657668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.660104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.660147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:38.687 [2024-11-17 00:52:30.660156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:20:38.687 [2024-11-17 00:52:30.660164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.660533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.660564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:38.687 [2024-11-17 00:52:30.660575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:38.687 [2024-11-17 00:52:30.660583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.683655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.683725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:38.687 [2024-11-17 00:52:30.683737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.053 ms 00:20:38.687 [2024-11-17 00:52:30.683753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.692150] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:38.687 [2024-11-17 00:52:30.695592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.695632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:38.687 [2024-11-17 00:52:30.695653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.783 ms 00:20:38.687 [2024-11-17 00:52:30.695662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.695746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.695757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:38.687 [2024-11-17 00:52:30.695767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:38.687 [2024-11-17 00:52:30.695775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.695842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.695853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:38.687 [2024-11-17 00:52:30.695861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:38.687 [2024-11-17 00:52:30.695873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.695900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.695909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:38.687 [2024-11-17 00:52:30.695917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:38.687 [2024-11-17 00:52:30.695925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:38.687 [2024-11-17 00:52:30.695960] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:38.687 [2024-11-17 00:52:30.695973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.695981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:38.687 [2024-11-17 00:52:30.695989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:38.687 [2024-11-17 00:52:30.696000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.701404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.701451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:38.687 [2024-11-17 00:52:30.701463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.383 ms 00:20:38.687 [2024-11-17 00:52:30.701471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.701557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.687 [2024-11-17 00:52:30.701574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:38.687 [2024-11-17 00:52:30.701583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:38.687 [2024-11-17 00:52:30.701592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.687 [2024-11-17 00:52:30.703245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.058 ms, result 0 00:20:40.074  [2024-11-17T00:52:33.080Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-17T00:52:34.024Z] Copying: 24/1024 [MB] (10 MBps) [2024-11-17T00:52:34.968Z] Copying: 39/1024 [MB] (15 MBps) [2024-11-17T00:52:35.913Z] Copying: 54/1024 [MB] (14 MBps) [2024-11-17T00:52:36.857Z] Copying: 65/1024 [MB] (11 MBps) [2024-11-17T00:52:37.798Z] Copying: 75/1024 [MB] (10 MBps) [2024-11-17T00:52:38.740Z] Copying: 87/1024 [MB] (11 MBps) [2024-11-17T00:52:40.127Z] Copying: 97/1024 [MB] (10 MBps) [2024-11-17T00:52:41.068Z] Copying: 107/1024 [MB] (10 MBps) [2024-11-17T00:52:42.010Z] Copying: 117/1024 [MB] (10 MBps) [2024-11-17T00:52:42.952Z] Copying: 128/1024 [MB] (10 MBps) [2024-11-17T00:52:43.895Z] Copying: 138/1024 [MB] (10 MBps) [2024-11-17T00:52:44.836Z] Copying: 148/1024 [MB] (10 MBps) [2024-11-17T00:52:45.781Z] Copying: 162328/1048576 [kB] (10136 kBps) [2024-11-17T00:52:46.726Z] Copying: 168/1024 [MB] (10 MBps) [2024-11-17T00:52:48.115Z] Copying: 179/1024 [MB] (10 MBps) [2024-11-17T00:52:49.058Z] Copying: 190/1024 [MB] (10 MBps) [2024-11-17T00:52:50.003Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-17T00:52:50.947Z] Copying: 211/1024 [MB] (10 MBps) [2024-11-17T00:52:51.983Z] Copying: 223/1024 [MB] (11 MBps) [2024-11-17T00:52:52.929Z] Copying: 240/1024 [MB] (16 MBps) [2024-11-17T00:52:53.873Z] Copying: 256/1024 [MB] (16 MBps) [2024-11-17T00:52:54.815Z] Copying: 277/1024 [MB] (21 MBps) [2024-11-17T00:52:55.756Z] Copying: 299/1024 [MB] (21 MBps) [2024-11-17T00:52:57.136Z] Copying: 313/1024 [MB] (14 MBps) [2024-11-17T00:52:58.069Z] Copying: 333/1024 [MB] (19 MBps) [2024-11-17T00:52:59.012Z] Copying: 369/1024 [MB] (36 MBps) [2024-11-17T00:52:59.953Z] Copying: 400/1024 [MB] (30 MBps) [2024-11-17T00:53:00.892Z] Copying: 416/1024 [MB] (15 MBps) [2024-11-17T00:53:01.836Z] Copying: 442/1024 [MB] (26 MBps) [2024-11-17T00:53:02.779Z] Copying: 461/1024 [MB] (19 MBps) 
[2024-11-17T00:53:03.723Z] Copying: 476/1024 [MB] (15 MBps) [2024-11-17T00:53:05.114Z] Copying: 486/1024 [MB] (10 MBps) [2024-11-17T00:53:06.056Z] Copying: 498/1024 [MB] (11 MBps) [2024-11-17T00:53:07.000Z] Copying: 509/1024 [MB] (10 MBps) [2024-11-17T00:53:07.943Z] Copying: 524/1024 [MB] (15 MBps) [2024-11-17T00:53:08.885Z] Copying: 539/1024 [MB] (14 MBps) [2024-11-17T00:53:09.830Z] Copying: 554/1024 [MB] (15 MBps) [2024-11-17T00:53:10.775Z] Copying: 567/1024 [MB] (12 MBps) [2024-11-17T00:53:11.720Z] Copying: 580/1024 [MB] (13 MBps) [2024-11-17T00:53:13.108Z] Copying: 594/1024 [MB] (14 MBps) [2024-11-17T00:53:14.053Z] Copying: 619288/1048576 [kB] (10168 kBps) [2024-11-17T00:53:14.998Z] Copying: 614/1024 [MB] (10 MBps) [2024-11-17T00:53:15.942Z] Copying: 639864/1048576 [kB] (10224 kBps) [2024-11-17T00:53:16.886Z] Copying: 650104/1048576 [kB] (10240 kBps) [2024-11-17T00:53:17.824Z] Copying: 645/1024 [MB] (10 MBps) [2024-11-17T00:53:18.769Z] Copying: 676/1024 [MB] (31 MBps) [2024-11-17T00:53:20.157Z] Copying: 692/1024 [MB] (15 MBps) [2024-11-17T00:53:20.730Z] Copying: 704/1024 [MB] (11 MBps) [2024-11-17T00:53:22.117Z] Copying: 731224/1048576 [kB] (10152 kBps) [2024-11-17T00:53:23.062Z] Copying: 724/1024 [MB] (10 MBps) [2024-11-17T00:53:24.060Z] Copying: 752456/1048576 [kB] (10200 kBps) [2024-11-17T00:53:25.013Z] Copying: 745/1024 [MB] (10 MBps) [2024-11-17T00:53:25.956Z] Copying: 782/1024 [MB] (36 MBps) [2024-11-17T00:53:26.901Z] Copying: 800/1024 [MB] (18 MBps) [2024-11-17T00:53:27.854Z] Copying: 820/1024 [MB] (19 MBps) [2024-11-17T00:53:28.800Z] Copying: 839/1024 [MB] (19 MBps) [2024-11-17T00:53:29.751Z] Copying: 857/1024 [MB] (17 MBps) [2024-11-17T00:53:31.136Z] Copying: 878/1024 [MB] (21 MBps) [2024-11-17T00:53:32.076Z] Copying: 897/1024 [MB] (18 MBps) [2024-11-17T00:53:33.018Z] Copying: 914/1024 [MB] (16 MBps) [2024-11-17T00:53:33.962Z] Copying: 932/1024 [MB] (18 MBps) [2024-11-17T00:53:34.906Z] Copying: 943/1024 [MB] (10 MBps) [2024-11-17T00:53:35.852Z] Copying: 975760/1048576 [kB] (10096 kBps) [2024-11-17T00:53:36.799Z] Copying: 963/1024 [MB] (10 MBps) [2024-11-17T00:53:37.744Z] Copying: 973/1024 [MB] (10 MBps) [2024-11-17T00:53:39.132Z] Copying: 984/1024 [MB] (10 MBps) [2024-11-17T00:53:40.080Z] Copying: 999/1024 [MB] (15 MBps) [2024-11-17T00:53:41.021Z] Copying: 1034000/1048576 [kB] (10228 kBps) [2024-11-17T00:53:41.595Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-17T00:53:41.595Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-17 00:53:41.337717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.337796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:49.532 [2024-11-17 00:53:41.337815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:49.532 [2024-11-17 00:53:41.337825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.340265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:49.532 [2024-11-17 00:53:41.343162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.343214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:49.532 [2024-11-17 00:53:41.343230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:21:49.532 [2024-11-17 00:53:41.343240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 
00:53:41.355366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.355408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:49.532 [2024-11-17 00:53:41.355429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.157 ms 00:21:49.532 [2024-11-17 00:53:41.355438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.380258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.380300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:49.532 [2024-11-17 00:53:41.380312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.802 ms 00:21:49.532 [2024-11-17 00:53:41.380321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.386972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.387015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:49.532 [2024-11-17 00:53:41.387032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.603 ms 00:21:49.532 [2024-11-17 00:53:41.387040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.389961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.390005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:49.532 [2024-11-17 00:53:41.390015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:21:49.532 [2024-11-17 00:53:41.390023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.394713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.394755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:49.532 [2024-11-17 00:53:41.394766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:21:49.532 [2024-11-17 00:53:41.394785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.565749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.565792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:49.532 [2024-11-17 00:53:41.565804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.912 ms 00:21:49.532 [2024-11-17 00:53:41.565813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.568612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.568652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:49.532 [2024-11-17 00:53:41.568663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.781 ms 00:21:49.532 [2024-11-17 00:53:41.568671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.570713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.570754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:49.532 [2024-11-17 00:53:41.570764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.959 ms 00:21:49.532 [2024-11-17 00:53:41.570771] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.572578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.572619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:49.532 [2024-11-17 00:53:41.572630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.765 ms 00:21:49.532 [2024-11-17 00:53:41.572637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.574290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.532 [2024-11-17 00:53:41.574331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:49.532 [2024-11-17 00:53:41.574341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:21:49.532 [2024-11-17 00:53:41.574349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.532 [2024-11-17 00:53:41.574405] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:49.532 [2024-11-17 00:53:41.574419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107008 / 261120 wr_cnt: 1 state: open 00:21:49.532 [2024-11-17 00:53:41.574430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574557] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:49.532 [2024-11-17 00:53:41.574634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 
00:53:41.574750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:21:49.533 [2024-11-17 00:53:41.574944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.574998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:49.533 [2024-11-17 00:53:41.575207] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:49.533 [2024-11-17 00:53:41.575216] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15826fcd-9a39-4e0b-88df-ff1adb33f8d8 00:21:49.533 [2024-11-17 00:53:41.575225] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107008 00:21:49.533 [2024-11-17 00:53:41.575233] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107968 00:21:49.533 [2024-11-17 00:53:41.575240] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107008 00:21:49.533 [2024-11-17 00:53:41.575258] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:21:49.533 [2024-11-17 00:53:41.575265] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:49.533 [2024-11-17 00:53:41.575273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:49.533 [2024-11-17 00:53:41.575281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:49.533 [2024-11-17 00:53:41.575288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:49.533 [2024-11-17 00:53:41.575295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:49.533 [2024-11-17 00:53:41.575302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.533 [2024-11-17 00:53:41.575310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:49.533 [2024-11-17 00:53:41.575319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:21:49.533 [2024-11-17 00:53:41.575327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.533 [2024-11-17 00:53:41.577696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.533 [2024-11-17 00:53:41.577728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:49.533 [2024-11-17 00:53:41.577739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.346 ms 00:21:49.533 [2024-11-17 00:53:41.577748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.533 [2024-11-17 00:53:41.577875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.534 [2024-11-17 00:53:41.577884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:21:49.534 [2024-11-17 00:53:41.577893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:49.534 [2024-11-17 00:53:41.577901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.534 [2024-11-17 00:53:41.584759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.534 [2024-11-17 00:53:41.584799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:49.534 [2024-11-17 00:53:41.584809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.534 [2024-11-17 00:53:41.584818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.534 [2024-11-17 00:53:41.584875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.534 [2024-11-17 00:53:41.584885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:49.534 [2024-11-17 00:53:41.584893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.534 [2024-11-17 00:53:41.584901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.534 [2024-11-17 00:53:41.584964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.534 [2024-11-17 00:53:41.584980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:49.534 [2024-11-17 00:53:41.584988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.534 [2024-11-17 00:53:41.584997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.534 [2024-11-17 00:53:41.585012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.534 [2024-11-17 00:53:41.585021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:49.534 [2024-11-17 00:53:41.585029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.534 [2024-11-17 00:53:41.585038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.795 [2024-11-17 00:53:41.599009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.795 [2024-11-17 00:53:41.599055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:49.795 [2024-11-17 00:53:41.599066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.795 [2024-11-17 00:53:41.599074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.795 [2024-11-17 00:53:41.609745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.795 [2024-11-17 00:53:41.609791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:49.795 [2024-11-17 00:53:41.609802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.795 [2024-11-17 00:53:41.609811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.795 [2024-11-17 00:53:41.609883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.795 [2024-11-17 00:53:41.609894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:49.795 [2024-11-17 00:53:41.609909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.795 [2024-11-17 00:53:41.609918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.795 [2024-11-17 00:53:41.609955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.795 
[2024-11-17 00:53:41.609964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:49.795 [2024-11-17 00:53:41.609978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.795 [2024-11-17 00:53:41.609989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.795 [2024-11-17 00:53:41.610064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.795 [2024-11-17 00:53:41.610074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:49.795 [2024-11-17 00:53:41.610085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.795 [2024-11-17 00:53:41.610093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.795 [2024-11-17 00:53:41.610123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.795 [2024-11-17 00:53:41.610132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:49.795 [2024-11-17 00:53:41.610140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.795 [2024-11-17 00:53:41.610152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.795 [2024-11-17 00:53:41.610193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.795 [2024-11-17 00:53:41.610202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:49.795 [2024-11-17 00:53:41.610210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.795 [2024-11-17 00:53:41.610221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.795 [2024-11-17 00:53:41.610266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.795 [2024-11-17 00:53:41.610276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:49.795 [2024-11-17 00:53:41.610284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.795 [2024-11-17 00:53:41.610293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.795 [2024-11-17 00:53:41.610445] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.439 ms, result 0 00:21:50.367 00:21:50.367 00:21:50.367 00:53:42 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:50.367 [2024-11-17 00:53:42.369013] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
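The spdk_dd invocation above reads 262144 blocks (skipping the first 131072) out of the ftl0 bdev, after it is restored from the dirty shutdown, into a host file that the harness later checks against a checksum recorded before the shutdown. A minimal sketch of that read-back-and-verify step, assuming the same repo layout as this run; every flag and path below is copied from commands visible in this log:

#!/usr/bin/env bash
# Read-back-and-verify sketch for the FTL restore test. Assumes the
# /home/vagrant/spdk_repo checkout used throughout this log.
SPDK=/home/vagrant/spdk_repo/spdk

# Dump 262144 blocks from the FTL bdev, skipping the first 131072;
# ftl.json describes the ftl0 instance and its base/cache bdevs.
"$SPDK/build/bin/spdk_dd" \
    --ib=ftl0 \
    --of="$SPDK/test/ftl/testfile" \
    --json="$SPDK/test/ftl/config/ftl.json" \
    --skip=131072 \
    --count=262144

# md5sum -c prints 'testfile: OK' only when the restored bytes match the
# checksum taken before shutdown (the restore.sh@82 step further below).
md5sum -c "$SPDK/test/ftl/testfile.md5"

The checksum comparison is the real pass/fail criterion here: the trace_step records only show that each management step returned status 0, while a matching md5 proves the L2P and NV-cache restore reproduced the user data exactly.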
00:21:50.367 [2024-11-17 00:53:42.369151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88855 ] 00:21:50.628 [2024-11-17 00:53:42.521662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.628 [2024-11-17 00:53:42.572174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.891 [2024-11-17 00:53:42.690383] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:50.891 [2024-11-17 00:53:42.690462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:50.891 [2024-11-17 00:53:42.851568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.891 [2024-11-17 00:53:42.851622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:50.891 [2024-11-17 00:53:42.851645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:50.891 [2024-11-17 00:53:42.851654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.891 [2024-11-17 00:53:42.851710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.891 [2024-11-17 00:53:42.851722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:50.891 [2024-11-17 00:53:42.851731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:50.891 [2024-11-17 00:53:42.851739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.891 [2024-11-17 00:53:42.851762] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:50.891 [2024-11-17 00:53:42.852402] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:50.891 [2024-11-17 00:53:42.852456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.891 [2024-11-17 00:53:42.852470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:50.891 [2024-11-17 00:53:42.852481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:21:50.891 [2024-11-17 00:53:42.852493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.891 [2024-11-17 00:53:42.854180] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:50.891 [2024-11-17 00:53:42.857863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.891 [2024-11-17 00:53:42.857908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:50.891 [2024-11-17 00:53:42.857919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:21:50.891 [2024-11-17 00:53:42.857928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.891 [2024-11-17 00:53:42.858014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.891 [2024-11-17 00:53:42.858028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:50.891 [2024-11-17 00:53:42.858041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:50.891 [2024-11-17 00:53:42.858048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.891 [2024-11-17 00:53:42.865996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
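The records above show the FTL instance being brought back up: the base bdev and the nvc0n1p0 write-buffer cache are opened first, then the superblock is loaded and validated before any user data is restored. As a hedged sketch of how such an instance is wired together over JSON-RPC (the base bdev name is a placeholder, and the bdev_ftl_create flag spellings are an assumption that can vary between SPDK releases; verify with scripts/rpc.py bdev_ftl_create -h):

#!/usr/bin/env bash
# Sketch: bind a base bdev and an NV-cache bdev into one FTL bdev via RPC.
# Assumes a running SPDK app reachable through the default RPC socket.
SPDK=/home/vagrant/spdk_repo/spdk

# -b: FTL bdev name (matches the [FTL][ftl0] tags in this log)
# -d: base data bdev ('nvme0n1' is a placeholder, not from this log)
# -c: write-buffer / NV cache bdev ('nvc0n1p0' matches this log)
"$SPDK/scripts/rpc.py" bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0

On a subsequent start the same instance is reattached by UUID (the bdev_ftl_load RPC, assuming current SPDK naming), which is what drives the 'Load super block' and 'Validate super block' steps above and, later in this run, the restore of the valid map, band info, and L2P.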
00:21:50.891 [2024-11-17 00:53:42.866035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:50.891 [2024-11-17 00:53:42.866045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.905 ms 00:21:50.891 [2024-11-17 00:53:42.866056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.891 [2024-11-17 00:53:42.866163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.891 [2024-11-17 00:53:42.866174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:50.891 [2024-11-17 00:53:42.866187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:50.891 [2024-11-17 00:53:42.866195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.891 [2024-11-17 00:53:42.866252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.891 [2024-11-17 00:53:42.866267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:50.891 [2024-11-17 00:53:42.866276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:50.891 [2024-11-17 00:53:42.866288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.891 [2024-11-17 00:53:42.866317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:50.891 [2024-11-17 00:53:42.868303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.891 [2024-11-17 00:53:42.868338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:50.891 [2024-11-17 00:53:42.868349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:21:50.891 [2024-11-17 00:53:42.868377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.891 [2024-11-17 00:53:42.868419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.891 [2024-11-17 00:53:42.868427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:50.891 [2024-11-17 00:53:42.868436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:50.892 [2024-11-17 00:53:42.868443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.892 [2024-11-17 00:53:42.868466] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:50.892 [2024-11-17 00:53:42.868490] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:50.892 [2024-11-17 00:53:42.868527] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:50.892 [2024-11-17 00:53:42.868542] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:50.892 [2024-11-17 00:53:42.868647] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:50.892 [2024-11-17 00:53:42.868661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:50.892 [2024-11-17 00:53:42.868672] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:50.892 [2024-11-17 00:53:42.868701] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:50.892 [2024-11-17 00:53:42.868720] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:50.892 [2024-11-17 00:53:42.868732] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:50.892 [2024-11-17 00:53:42.868739] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:50.892 [2024-11-17 00:53:42.868747] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:50.892 [2024-11-17 00:53:42.868754] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:50.892 [2024-11-17 00:53:42.868762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.892 [2024-11-17 00:53:42.868769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:50.892 [2024-11-17 00:53:42.868777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:21:50.892 [2024-11-17 00:53:42.868784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.892 [2024-11-17 00:53:42.868870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.892 [2024-11-17 00:53:42.868888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:50.892 [2024-11-17 00:53:42.868896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:50.892 [2024-11-17 00:53:42.868904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.892 [2024-11-17 00:53:42.869011] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:50.892 [2024-11-17 00:53:42.869023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:50.892 [2024-11-17 00:53:42.869033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.892 [2024-11-17 00:53:42.869053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:50.892 [2024-11-17 00:53:42.869075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:50.892 [2024-11-17 00:53:42.869091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:50.892 [2024-11-17 00:53:42.869099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.892 [2024-11-17 00:53:42.869116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:50.892 [2024-11-17 00:53:42.869124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:50.892 [2024-11-17 00:53:42.869132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.892 [2024-11-17 00:53:42.869140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:50.892 [2024-11-17 00:53:42.869148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:50.892 [2024-11-17 00:53:42.869156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:50.892 [2024-11-17 00:53:42.869175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:50.892 [2024-11-17 00:53:42.869182] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:50.892 [2024-11-17 00:53:42.869198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.892 [2024-11-17 00:53:42.869213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:50.892 [2024-11-17 00:53:42.869222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.892 [2024-11-17 00:53:42.869237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:50.892 [2024-11-17 00:53:42.869245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.892 [2024-11-17 00:53:42.869260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:50.892 [2024-11-17 00:53:42.869268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.892 [2024-11-17 00:53:42.869284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:50.892 [2024-11-17 00:53:42.869294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.892 [2024-11-17 00:53:42.869311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:50.892 [2024-11-17 00:53:42.869319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:50.892 [2024-11-17 00:53:42.869326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.892 [2024-11-17 00:53:42.869333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:50.892 [2024-11-17 00:53:42.869341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:50.892 [2024-11-17 00:53:42.869348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:50.892 [2024-11-17 00:53:42.869379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:50.892 [2024-11-17 00:53:42.869391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869399] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:50.892 [2024-11-17 00:53:42.869408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:50.892 [2024-11-17 00:53:42.869416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.892 [2024-11-17 00:53:42.869429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.892 [2024-11-17 00:53:42.869437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:50.892 [2024-11-17 00:53:42.869447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:50.892 [2024-11-17 00:53:42.869454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:50.892 
[2024-11-17 00:53:42.869461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:50.892 [2024-11-17 00:53:42.869469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:50.892 [2024-11-17 00:53:42.869476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:50.892 [2024-11-17 00:53:42.869484] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:50.892 [2024-11-17 00:53:42.869493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.892 [2024-11-17 00:53:42.869502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:50.892 [2024-11-17 00:53:42.869509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:50.892 [2024-11-17 00:53:42.869517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:50.892 [2024-11-17 00:53:42.869524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:50.892 [2024-11-17 00:53:42.869531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:50.892 [2024-11-17 00:53:42.869538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:50.892 [2024-11-17 00:53:42.869545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:50.893 [2024-11-17 00:53:42.869553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:50.893 [2024-11-17 00:53:42.869560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:50.893 [2024-11-17 00:53:42.869569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:50.893 [2024-11-17 00:53:42.869576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:50.893 [2024-11-17 00:53:42.869584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:50.893 [2024-11-17 00:53:42.869591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:50.893 [2024-11-17 00:53:42.869598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:50.893 [2024-11-17 00:53:42.869606] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:50.893 [2024-11-17 00:53:42.869614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.893 [2024-11-17 00:53:42.869621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:50.893 [2024-11-17 00:53:42.869629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:50.893 [2024-11-17 00:53:42.869636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:50.893 [2024-11-17 00:53:42.869645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:50.893 [2024-11-17 00:53:42.869653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.869661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:50.893 [2024-11-17 00:53:42.869669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:21:50.893 [2024-11-17 00:53:42.869677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.891168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.891219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:50.893 [2024-11-17 00:53:42.891233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.444 ms 00:21:50.893 [2024-11-17 00:53:42.891255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.891352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.891380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:50.893 [2024-11-17 00:53:42.891394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:50.893 [2024-11-17 00:53:42.891402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.902993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.903038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.893 [2024-11-17 00:53:42.903049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.524 ms 00:21:50.893 [2024-11-17 00:53:42.903057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.903093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.903108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:50.893 [2024-11-17 00:53:42.903117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:50.893 [2024-11-17 00:53:42.903125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.903727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.903765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:50.893 [2024-11-17 00:53:42.903777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:21:50.893 [2024-11-17 00:53:42.903794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.903948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.903963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:50.893 [2024-11-17 00:53:42.903973] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:50.893 [2024-11-17 00:53:42.903982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.910805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.910847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:50.893 [2024-11-17 00:53:42.910864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.797 ms 00:21:50.893 [2024-11-17 00:53:42.910872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.914598] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:50.893 [2024-11-17 00:53:42.914641] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:50.893 [2024-11-17 00:53:42.914661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.914669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:50.893 [2024-11-17 00:53:42.914678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:21:50.893 [2024-11-17 00:53:42.914690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.930333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.930382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:50.893 [2024-11-17 00:53:42.930403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.591 ms 00:21:50.893 [2024-11-17 00:53:42.930411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.933273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.933313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:50.893 [2024-11-17 00:53:42.933323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:21:50.893 [2024-11-17 00:53:42.933331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.935831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.935869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:50.893 [2024-11-17 00:53:42.935879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:21:50.893 [2024-11-17 00:53:42.935887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.893 [2024-11-17 00:53:42.936234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.893 [2024-11-17 00:53:42.936251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:50.893 [2024-11-17 00:53:42.936260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:21:50.893 [2024-11-17 00:53:42.936270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.155 [2024-11-17 00:53:42.959961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.155 [2024-11-17 00:53:42.960027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:51.155 [2024-11-17 00:53:42.960041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.672 ms 00:21:51.155 [2024-11-17 00:53:42.960049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.155 [2024-11-17 00:53:42.968294] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:51.155 [2024-11-17 00:53:42.971695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.155 [2024-11-17 00:53:42.971731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:51.155 [2024-11-17 00:53:42.971751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.595 ms 00:21:51.155 [2024-11-17 00:53:42.971759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.155 [2024-11-17 00:53:42.971837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.155 [2024-11-17 00:53:42.971849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:51.155 [2024-11-17 00:53:42.971859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:51.155 [2024-11-17 00:53:42.971870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.155 [2024-11-17 00:53:42.973664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.155 [2024-11-17 00:53:42.973701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:51.155 [2024-11-17 00:53:42.973712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.756 ms 00:21:51.155 [2024-11-17 00:53:42.973723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.155 [2024-11-17 00:53:42.973750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.155 [2024-11-17 00:53:42.973759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:51.155 [2024-11-17 00:53:42.973767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:51.155 [2024-11-17 00:53:42.973775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.155 [2024-11-17 00:53:42.973819] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:51.155 [2024-11-17 00:53:42.973832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.155 [2024-11-17 00:53:42.973840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:51.155 [2024-11-17 00:53:42.973849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:51.155 [2024-11-17 00:53:42.973857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.155 [2024-11-17 00:53:42.979523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.155 [2024-11-17 00:53:42.979565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:51.155 [2024-11-17 00:53:42.979587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.640 ms 00:21:51.155 [2024-11-17 00:53:42.979595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.155 [2024-11-17 00:53:42.979679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.155 [2024-11-17 00:53:42.979689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:51.155 [2024-11-17 00:53:42.979698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:51.155 [2024-11-17 00:53:42.979707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.155 
[2024-11-17 00:53:42.981134] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.082 ms, result 0 00:21:52.543  [2024-11-17T00:53:45.179Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-17T00:53:46.566Z] Copying: 31/1024 [MB] (19 MBps) [2024-11-17T00:53:47.511Z] Copying: 45/1024 [MB] (14 MBps) [2024-11-17T00:53:48.457Z] Copying: 63/1024 [MB] (18 MBps) [2024-11-17T00:53:49.402Z] Copying: 81/1024 [MB] (17 MBps) [2024-11-17T00:53:50.345Z] Copying: 101/1024 [MB] (19 MBps) [2024-11-17T00:53:51.288Z] Copying: 119/1024 [MB] (18 MBps) [2024-11-17T00:53:52.231Z] Copying: 137/1024 [MB] (18 MBps) [2024-11-17T00:53:53.175Z] Copying: 150/1024 [MB] (12 MBps) [2024-11-17T00:53:54.627Z] Copying: 165/1024 [MB] (15 MBps) [2024-11-17T00:53:55.246Z] Copying: 184/1024 [MB] (18 MBps) [2024-11-17T00:53:56.190Z] Copying: 199/1024 [MB] (15 MBps) [2024-11-17T00:53:57.577Z] Copying: 211/1024 [MB] (11 MBps) [2024-11-17T00:53:58.520Z] Copying: 226/1024 [MB] (14 MBps) [2024-11-17T00:53:59.464Z] Copying: 236/1024 [MB] (10 MBps) [2024-11-17T00:54:00.406Z] Copying: 247/1024 [MB] (10 MBps) [2024-11-17T00:54:01.349Z] Copying: 267/1024 [MB] (19 MBps) [2024-11-17T00:54:02.290Z] Copying: 278/1024 [MB] (11 MBps) [2024-11-17T00:54:03.232Z] Copying: 289/1024 [MB] (10 MBps) [2024-11-17T00:54:04.177Z] Copying: 299/1024 [MB] (10 MBps) [2024-11-17T00:54:05.562Z] Copying: 310/1024 [MB] (10 MBps) [2024-11-17T00:54:06.507Z] Copying: 320/1024 [MB] (10 MBps) [2024-11-17T00:54:07.451Z] Copying: 331/1024 [MB] (11 MBps) [2024-11-17T00:54:08.393Z] Copying: 342/1024 [MB] (10 MBps) [2024-11-17T00:54:09.339Z] Copying: 353/1024 [MB] (10 MBps) [2024-11-17T00:54:10.282Z] Copying: 370/1024 [MB] (17 MBps) [2024-11-17T00:54:11.227Z] Copying: 381/1024 [MB] (11 MBps) [2024-11-17T00:54:12.173Z] Copying: 392/1024 [MB] (10 MBps) [2024-11-17T00:54:13.560Z] Copying: 403/1024 [MB] (11 MBps) [2024-11-17T00:54:14.504Z] Copying: 414/1024 [MB] (10 MBps) [2024-11-17T00:54:15.448Z] Copying: 429/1024 [MB] (15 MBps) [2024-11-17T00:54:16.394Z] Copying: 446/1024 [MB] (16 MBps) [2024-11-17T00:54:17.338Z] Copying: 462/1024 [MB] (16 MBps) [2024-11-17T00:54:18.278Z] Copying: 479/1024 [MB] (16 MBps) [2024-11-17T00:54:19.223Z] Copying: 500/1024 [MB] (20 MBps) [2024-11-17T00:54:20.612Z] Copying: 516/1024 [MB] (16 MBps) [2024-11-17T00:54:21.185Z] Copying: 527/1024 [MB] (10 MBps) [2024-11-17T00:54:22.575Z] Copying: 538/1024 [MB] (10 MBps) [2024-11-17T00:54:23.520Z] Copying: 548/1024 [MB] (10 MBps) [2024-11-17T00:54:24.461Z] Copying: 559/1024 [MB] (10 MBps) [2024-11-17T00:54:25.405Z] Copying: 581/1024 [MB] (22 MBps) [2024-11-17T00:54:26.348Z] Copying: 592/1024 [MB] (10 MBps) [2024-11-17T00:54:27.369Z] Copying: 602/1024 [MB] (10 MBps) [2024-11-17T00:54:28.310Z] Copying: 625/1024 [MB] (22 MBps) [2024-11-17T00:54:29.251Z] Copying: 640/1024 [MB] (15 MBps) [2024-11-17T00:54:30.190Z] Copying: 655/1024 [MB] (15 MBps) [2024-11-17T00:54:31.571Z] Copying: 667/1024 [MB] (11 MBps) [2024-11-17T00:54:32.517Z] Copying: 681/1024 [MB] (14 MBps) [2024-11-17T00:54:33.462Z] Copying: 695/1024 [MB] (13 MBps) [2024-11-17T00:54:34.401Z] Copying: 707/1024 [MB] (12 MBps) [2024-11-17T00:54:35.341Z] Copying: 720/1024 [MB] (13 MBps) [2024-11-17T00:54:36.284Z] Copying: 741/1024 [MB] (20 MBps) [2024-11-17T00:54:37.226Z] Copying: 755/1024 [MB] (14 MBps) [2024-11-17T00:54:38.610Z] Copying: 773/1024 [MB] (17 MBps) [2024-11-17T00:54:39.182Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-17T00:54:40.567Z] Copying: 795/1024 [MB] (10 MBps) 
[2024-11-17T00:54:41.510Z] Copying: 809/1024 [MB] (14 MBps) [2024-11-17T00:54:42.452Z] Copying: 820/1024 [MB] (10 MBps) [2024-11-17T00:54:43.397Z] Copying: 838/1024 [MB] (18 MBps) [2024-11-17T00:54:44.340Z] Copying: 856/1024 [MB] (18 MBps) [2024-11-17T00:54:45.284Z] Copying: 872/1024 [MB] (15 MBps) [2024-11-17T00:54:46.225Z] Copying: 887/1024 [MB] (14 MBps) [2024-11-17T00:54:47.610Z] Copying: 905/1024 [MB] (18 MBps) [2024-11-17T00:54:48.180Z] Copying: 918/1024 [MB] (13 MBps) [2024-11-17T00:54:49.564Z] Copying: 933/1024 [MB] (15 MBps) [2024-11-17T00:54:50.507Z] Copying: 948/1024 [MB] (14 MBps) [2024-11-17T00:54:51.449Z] Copying: 967/1024 [MB] (19 MBps) [2024-11-17T00:54:52.391Z] Copying: 980/1024 [MB] (12 MBps) [2024-11-17T00:54:53.335Z] Copying: 995/1024 [MB] (15 MBps) [2024-11-17T00:54:54.280Z] Copying: 1012/1024 [MB] (16 MBps) [2024-11-17T00:54:54.280Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-17 00:54:54.111456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.217 [2024-11-17 00:54:54.111532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:02.217 [2024-11-17 00:54:54.111549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.217 [2024-11-17 00:54:54.111563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.217 [2024-11-17 00:54:54.111588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:02.217 [2024-11-17 00:54:54.112329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.217 [2024-11-17 00:54:54.112393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:02.217 [2024-11-17 00:54:54.112406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:23:02.217 [2024-11-17 00:54:54.112416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.217 [2024-11-17 00:54:54.112668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.217 [2024-11-17 00:54:54.112680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:02.217 [2024-11-17 00:54:54.112691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:23:02.217 [2024-11-17 00:54:54.112700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.217 [2024-11-17 00:54:54.119825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.217 [2024-11-17 00:54:54.119874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:02.217 [2024-11-17 00:54:54.119887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.106 ms 00:23:02.217 [2024-11-17 00:54:54.119899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.217 [2024-11-17 00:54:54.128032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.217 [2024-11-17 00:54:54.128084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:02.217 [2024-11-17 00:54:54.128096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.084 ms 00:23:02.217 [2024-11-17 00:54:54.128105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.217 [2024-11-17 00:54:54.131088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.217 [2024-11-17 00:54:54.131138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:02.217 [2024-11-17 
00:54:54.131149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:23:02.217 [2024-11-17 00:54:54.131156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.217 [2024-11-17 00:54:54.136226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.217 [2024-11-17 00:54:54.136276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:02.217 [2024-11-17 00:54:54.136287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.021 ms 00:23:02.217 [2024-11-17 00:54:54.136296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.480 [2024-11-17 00:54:54.381994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.480 [2024-11-17 00:54:54.382076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:02.480 [2024-11-17 00:54:54.382093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 245.644 ms 00:23:02.480 [2024-11-17 00:54:54.382111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.480 [2024-11-17 00:54:54.384901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.480 [2024-11-17 00:54:54.384948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:02.480 [2024-11-17 00:54:54.384959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms 00:23:02.480 [2024-11-17 00:54:54.384968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.480 [2024-11-17 00:54:54.387076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.480 [2024-11-17 00:54:54.387136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:02.480 [2024-11-17 00:54:54.387146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:23:02.480 [2024-11-17 00:54:54.387154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.480 [2024-11-17 00:54:54.389068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.480 [2024-11-17 00:54:54.389112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:02.480 [2024-11-17 00:54:54.389121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:23:02.480 [2024-11-17 00:54:54.389129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.480 [2024-11-17 00:54:54.390928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.480 [2024-11-17 00:54:54.390972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:02.480 [2024-11-17 00:54:54.390983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:23:02.480 [2024-11-17 00:54:54.390991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.480 [2024-11-17 00:54:54.391034] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:02.480 [2024-11-17 00:54:54.391051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:02.480 [2024-11-17 00:54:54.391061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391276] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 
00:54:54.391499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:02.480 [2024-11-17 00:54:54.391555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:23:02.481 [2024-11-17 00:54:54.391700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:02.481 [2024-11-17 00:54:54.391886] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:02.481 [2024-11-17 00:54:54.391894] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15826fcd-9a39-4e0b-88df-ff1adb33f8d8 00:23:02.481 [2024-11-17 00:54:54.391902] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:02.481 [2024-11-17 
00:54:54.391909] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 25024 00:23:02.481 [2024-11-17 00:54:54.391917] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 24064 00:23:02.481 [2024-11-17 00:54:54.391941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0399 00:23:02.481 [2024-11-17 00:54:54.391949] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:02.481 [2024-11-17 00:54:54.391957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:02.481 [2024-11-17 00:54:54.391965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:02.481 [2024-11-17 00:54:54.391974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:02.481 [2024-11-17 00:54:54.391981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:02.481 [2024-11-17 00:54:54.391989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.481 [2024-11-17 00:54:54.391997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:02.481 [2024-11-17 00:54:54.392006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:23:02.481 [2024-11-17 00:54:54.392013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.394469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.481 [2024-11-17 00:54:54.394522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:02.481 [2024-11-17 00:54:54.394538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.437 ms 00:23:02.481 [2024-11-17 00:54:54.394547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.394673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.481 [2024-11-17 00:54:54.394683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:02.481 [2024-11-17 00:54:54.394692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:02.481 [2024-11-17 00:54:54.394700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.401778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.401825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.481 [2024-11-17 00:54:54.401835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.401843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.401907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.401918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.481 [2024-11-17 00:54:54.401926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.401934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.401998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.402016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.481 [2024-11-17 00:54:54.402025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.402033] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.402049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.402057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.481 [2024-11-17 00:54:54.402066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.402074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.416098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.416143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.481 [2024-11-17 00:54:54.416155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.416164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.426469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.426518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.481 [2024-11-17 00:54:54.426538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.426547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.426640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.426652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.481 [2024-11-17 00:54:54.426664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.426673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.426710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.426720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.481 [2024-11-17 00:54:54.426729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.426736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.426803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.426820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.481 [2024-11-17 00:54:54.426830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.426841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.481 [2024-11-17 00:54:54.426869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.481 [2024-11-17 00:54:54.426880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:02.481 [2024-11-17 00:54:54.426889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.481 [2024-11-17 00:54:54.426897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.482 [2024-11-17 00:54:54.426933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.482 [2024-11-17 00:54:54.426945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.482 [2024-11-17 00:54:54.426953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
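The trace_step records in this teardown sequence always arrive as a fixed group per management step: an Action/Rollback header, a name, a duration, and a status. That regularity makes the per-step timings easy to pull out of a saved copy of this console output. A minimal sketch, assuming one record per line as the console originally emitted them and a hypothetical capture file named console.log (neither the file name nor this pipeline is part of the test itself):

  # pair each management step with its duration; name and duration are printed
  # on separate trace_step lines, so joining adjacent matches lines them up
  grep -E 'trace_step: .*\] (name|duration):' console.log \
    | sed -E 's/.*(name|duration): //' \
    | paste - -

Each output row is then a step name followed by its duration in milliseconds, which is enough to spot outliers such as the multi-second NV cache scrub and L2P steps later in this log.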
00:23:02.482 [2024-11-17 00:54:54.426969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.482 [2024-11-17 00:54:54.427013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.482 [2024-11-17 00:54:54.427035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.482 [2024-11-17 00:54:54.427044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.482 [2024-11-17 00:54:54.427052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.482 [2024-11-17 00:54:54.427183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 315.694 ms, result 0 00:23:02.743 00:23:02.743 00:23:02.743 00:54:54 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:05.290 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:05.290 00:54:56 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:05.290 00:54:56 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:05.290 00:54:56 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86490 00:23:05.290 00:54:57 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86490 ']' 00:23:05.290 Process with pid 86490 is not found 00:23:05.290 00:54:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86490 00:23:05.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86490) - No such process 00:23:05.290 00:54:57 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86490 is not found' 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:05.290 Remove shared memory files 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:05.290 00:54:57 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:05.290 ************************************ 00:23:05.290 END TEST ftl_restore 00:23:05.290 ************************************ 00:23:05.290 00:23:05.290 real 5m2.231s 00:23:05.290 user 4m49.866s 00:23:05.290 sys 0m12.102s 00:23:05.290 00:54:57 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.290 00:54:57 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:05.290 00:54:57 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:05.290 00:54:57 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:05.290 00:54:57 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:05.290 00:54:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:05.290 ************************************ 00:23:05.290 START TEST ftl_dirty_shutdown 00:23:05.290 
************************************ 00:23:05.290 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:05.290 * Looking for test storage... 00:23:05.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.290 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:05.290 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:23:05.290 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:05.290 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:05.290 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.290 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:05.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.291 --rc genhtml_branch_coverage=1 00:23:05.291 --rc genhtml_function_coverage=1 00:23:05.291 --rc genhtml_legend=1 00:23:05.291 --rc geninfo_all_blocks=1 00:23:05.291 --rc geninfo_unexecuted_blocks=1 00:23:05.291 00:23:05.291 ' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:05.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.291 --rc genhtml_branch_coverage=1 00:23:05.291 --rc genhtml_function_coverage=1 00:23:05.291 --rc genhtml_legend=1 00:23:05.291 --rc geninfo_all_blocks=1 00:23:05.291 --rc geninfo_unexecuted_blocks=1 00:23:05.291 00:23:05.291 ' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:05.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.291 --rc genhtml_branch_coverage=1 00:23:05.291 --rc genhtml_function_coverage=1 00:23:05.291 --rc genhtml_legend=1 00:23:05.291 --rc geninfo_all_blocks=1 00:23:05.291 --rc geninfo_unexecuted_blocks=1 00:23:05.291 00:23:05.291 ' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:05.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.291 --rc genhtml_branch_coverage=1 00:23:05.291 --rc genhtml_function_coverage=1 00:23:05.291 --rc genhtml_legend=1 00:23:05.291 --rc geninfo_all_blocks=1 00:23:05.291 --rc geninfo_unexecuted_blocks=1 00:23:05.291 00:23:05.291 ' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:05.291 00:54:57 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89693 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89693 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89693 ']' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:05.291 00:54:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:05.552 [2024-11-17 00:54:57.357437] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
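What the harness is doing at this point is the standard SPDK target bring-up: launch spdk_tgt pinned to core 0, then poll the RPC socket until the target answers before issuing any bdev commands. A minimal sketch of that handshake, assuming the repo layout shown in the surrounding paths (this is an illustration, not the literal waitforlisten implementation in autotest_common.sh):

  # start the target on core mask 0x1, matching the -m 0x1 invocation above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # poll until /var/tmp/spdk.sock accepts RPCs; rpc_get_methods is a cheap query
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done

Once the loop exits, RPCs such as the bdev_nvme_attach_controller call that follows in this log can be issued safely.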
00:23:05.552 [2024-11-17 00:54:57.358060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89693 ] 00:23:05.552 [2024-11-17 00:54:57.508104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.552 [2024-11-17 00:54:57.547791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:06.559 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:06.845 { 00:23:06.845 "name": "nvme0n1", 00:23:06.845 "aliases": [ 00:23:06.845 "8133b003-8f6f-4170-b3ff-1f0c2dd24cee" 00:23:06.845 ], 00:23:06.845 "product_name": "NVMe disk", 00:23:06.845 "block_size": 4096, 00:23:06.845 "num_blocks": 1310720, 00:23:06.845 "uuid": "8133b003-8f6f-4170-b3ff-1f0c2dd24cee", 00:23:06.845 "numa_id": -1, 00:23:06.845 "assigned_rate_limits": { 00:23:06.845 "rw_ios_per_sec": 0, 00:23:06.845 "rw_mbytes_per_sec": 0, 00:23:06.845 "r_mbytes_per_sec": 0, 00:23:06.845 "w_mbytes_per_sec": 0 00:23:06.845 }, 00:23:06.845 "claimed": true, 00:23:06.845 "claim_type": "read_many_write_one", 00:23:06.845 "zoned": false, 00:23:06.845 "supported_io_types": { 00:23:06.845 "read": true, 00:23:06.845 "write": true, 00:23:06.845 "unmap": true, 00:23:06.845 "flush": true, 00:23:06.845 "reset": true, 00:23:06.845 "nvme_admin": true, 00:23:06.845 "nvme_io": true, 00:23:06.845 "nvme_io_md": false, 00:23:06.845 "write_zeroes": true, 00:23:06.845 "zcopy": false, 00:23:06.845 "get_zone_info": false, 00:23:06.845 "zone_management": false, 00:23:06.845 "zone_append": false, 00:23:06.845 "compare": true, 00:23:06.845 "compare_and_write": false, 00:23:06.845 "abort": true, 00:23:06.845 "seek_hole": false, 00:23:06.845 "seek_data": false, 00:23:06.845 
"copy": true, 00:23:06.845 "nvme_iov_md": false 00:23:06.845 }, 00:23:06.845 "driver_specific": { 00:23:06.845 "nvme": [ 00:23:06.845 { 00:23:06.845 "pci_address": "0000:00:11.0", 00:23:06.845 "trid": { 00:23:06.845 "trtype": "PCIe", 00:23:06.845 "traddr": "0000:00:11.0" 00:23:06.845 }, 00:23:06.845 "ctrlr_data": { 00:23:06.845 "cntlid": 0, 00:23:06.845 "vendor_id": "0x1b36", 00:23:06.845 "model_number": "QEMU NVMe Ctrl", 00:23:06.845 "serial_number": "12341", 00:23:06.845 "firmware_revision": "8.0.0", 00:23:06.845 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:06.845 "oacs": { 00:23:06.845 "security": 0, 00:23:06.845 "format": 1, 00:23:06.845 "firmware": 0, 00:23:06.845 "ns_manage": 1 00:23:06.845 }, 00:23:06.845 "multi_ctrlr": false, 00:23:06.845 "ana_reporting": false 00:23:06.845 }, 00:23:06.845 "vs": { 00:23:06.845 "nvme_version": "1.4" 00:23:06.845 }, 00:23:06.845 "ns_data": { 00:23:06.845 "id": 1, 00:23:06.845 "can_share": false 00:23:06.845 } 00:23:06.845 } 00:23:06.845 ], 00:23:06.845 "mp_policy": "active_passive" 00:23:06.845 } 00:23:06.845 } 00:23:06.845 ]' 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:06.845 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:07.113 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=f97ef6d5-c1d6-4d91-9a85-545fcf30333b 00:23:07.113 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:07.113 00:54:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f97ef6d5-c1d6-4d91-9a85-545fcf30333b 00:23:07.373 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=64aad473-7af9-4910-9850-8a822aee56a5 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 64aad473-7af9-4910-9850-8a822aee56a5 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=f9342aab-cb52-400e-a16c-9265d5d23456 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f9342aab-cb52-400e-a16c-9265d5d23456 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=f9342aab-cb52-400e-a16c-9265d5d23456 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size f9342aab-cb52-400e-a16c-9265d5d23456 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f9342aab-cb52-400e-a16c-9265d5d23456 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:07.633 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f9342aab-cb52-400e-a16c-9265d5d23456 00:23:07.894 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:07.894 { 00:23:07.894 "name": "f9342aab-cb52-400e-a16c-9265d5d23456", 00:23:07.894 "aliases": [ 00:23:07.894 "lvs/nvme0n1p0" 00:23:07.894 ], 00:23:07.894 "product_name": "Logical Volume", 00:23:07.894 "block_size": 4096, 00:23:07.894 "num_blocks": 26476544, 00:23:07.894 "uuid": "f9342aab-cb52-400e-a16c-9265d5d23456", 00:23:07.894 "assigned_rate_limits": { 00:23:07.894 "rw_ios_per_sec": 0, 00:23:07.894 "rw_mbytes_per_sec": 0, 00:23:07.894 "r_mbytes_per_sec": 0, 00:23:07.894 "w_mbytes_per_sec": 0 00:23:07.894 }, 00:23:07.894 "claimed": false, 00:23:07.894 "zoned": false, 00:23:07.894 "supported_io_types": { 00:23:07.894 "read": true, 00:23:07.894 "write": true, 00:23:07.894 "unmap": true, 00:23:07.894 "flush": false, 00:23:07.894 "reset": true, 00:23:07.894 "nvme_admin": false, 00:23:07.894 "nvme_io": false, 00:23:07.894 "nvme_io_md": false, 00:23:07.894 "write_zeroes": true, 00:23:07.894 "zcopy": false, 00:23:07.894 "get_zone_info": false, 00:23:07.894 "zone_management": false, 00:23:07.894 "zone_append": false, 00:23:07.894 "compare": false, 00:23:07.895 "compare_and_write": false, 00:23:07.895 "abort": false, 00:23:07.895 "seek_hole": true, 00:23:07.895 "seek_data": true, 00:23:07.895 "copy": false, 00:23:07.895 "nvme_iov_md": false 00:23:07.895 }, 00:23:07.895 "driver_specific": { 00:23:07.895 "lvol": { 00:23:07.895 "lvol_store_uuid": "64aad473-7af9-4910-9850-8a822aee56a5", 00:23:07.895 "base_bdev": "nvme0n1", 00:23:07.895 "thin_provision": true, 00:23:07.895 "num_allocated_clusters": 0, 00:23:07.895 "snapshot": false, 00:23:07.895 "clone": false, 00:23:07.895 "esnap_clone": false 00:23:07.895 } 00:23:07.895 } 00:23:07.895 } 00:23:07.895 ]' 00:23:07.895 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:07.895 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:07.895 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:08.156 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:08.156 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:08.156 00:54:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:08.156 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:08.156 00:54:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:08.156 00:54:59 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:08.415 00:55:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:08.415 00:55:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:08.415 00:55:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size f9342aab-cb52-400e-a16c-9265d5d23456 00:23:08.416 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f9342aab-cb52-400e-a16c-9265d5d23456 00:23:08.416 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:08.416 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:08.416 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:08.416 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f9342aab-cb52-400e-a16c-9265d5d23456 00:23:08.416 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:08.416 { 00:23:08.416 "name": "f9342aab-cb52-400e-a16c-9265d5d23456", 00:23:08.416 "aliases": [ 00:23:08.416 "lvs/nvme0n1p0" 00:23:08.416 ], 00:23:08.416 "product_name": "Logical Volume", 00:23:08.416 "block_size": 4096, 00:23:08.416 "num_blocks": 26476544, 00:23:08.416 "uuid": "f9342aab-cb52-400e-a16c-9265d5d23456", 00:23:08.416 "assigned_rate_limits": { 00:23:08.416 "rw_ios_per_sec": 0, 00:23:08.416 "rw_mbytes_per_sec": 0, 00:23:08.416 "r_mbytes_per_sec": 0, 00:23:08.416 "w_mbytes_per_sec": 0 00:23:08.416 }, 00:23:08.416 "claimed": false, 00:23:08.416 "zoned": false, 00:23:08.416 "supported_io_types": { 00:23:08.416 "read": true, 00:23:08.416 "write": true, 00:23:08.416 "unmap": true, 00:23:08.416 "flush": false, 00:23:08.416 "reset": true, 00:23:08.416 "nvme_admin": false, 00:23:08.416 "nvme_io": false, 00:23:08.416 "nvme_io_md": false, 00:23:08.416 "write_zeroes": true, 00:23:08.416 "zcopy": false, 00:23:08.416 "get_zone_info": false, 00:23:08.416 "zone_management": false, 00:23:08.416 "zone_append": false, 00:23:08.416 "compare": false, 00:23:08.416 "compare_and_write": false, 00:23:08.416 "abort": false, 00:23:08.416 "seek_hole": true, 00:23:08.416 "seek_data": true, 00:23:08.416 "copy": false, 00:23:08.416 "nvme_iov_md": false 00:23:08.416 }, 00:23:08.416 "driver_specific": { 00:23:08.416 "lvol": { 00:23:08.416 "lvol_store_uuid": "64aad473-7af9-4910-9850-8a822aee56a5", 00:23:08.416 "base_bdev": "nvme0n1", 00:23:08.416 "thin_provision": true, 00:23:08.416 "num_allocated_clusters": 0, 00:23:08.416 "snapshot": false, 00:23:08.416 "clone": false, 00:23:08.416 "esnap_clone": false 00:23:08.416 } 00:23:08.416 } 00:23:08.416 } 00:23:08.416 ]' 00:23:08.416 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:08.416 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:08.416 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size f9342aab-cb52-400e-a16c-9265d5d23456 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f9342aab-cb52-400e-a16c-9265d5d23456 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:08.676 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f9342aab-cb52-400e-a16c-9265d5d23456 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:08.935 { 00:23:08.935 "name": "f9342aab-cb52-400e-a16c-9265d5d23456", 00:23:08.935 "aliases": [ 00:23:08.935 "lvs/nvme0n1p0" 00:23:08.935 ], 00:23:08.935 "product_name": "Logical Volume", 00:23:08.935 "block_size": 4096, 00:23:08.935 "num_blocks": 26476544, 00:23:08.935 "uuid": "f9342aab-cb52-400e-a16c-9265d5d23456", 00:23:08.935 "assigned_rate_limits": { 00:23:08.935 "rw_ios_per_sec": 0, 00:23:08.935 "rw_mbytes_per_sec": 0, 00:23:08.935 "r_mbytes_per_sec": 0, 00:23:08.935 "w_mbytes_per_sec": 0 00:23:08.935 }, 00:23:08.935 "claimed": false, 00:23:08.935 "zoned": false, 00:23:08.935 "supported_io_types": { 00:23:08.935 "read": true, 00:23:08.935 "write": true, 00:23:08.935 "unmap": true, 00:23:08.935 "flush": false, 00:23:08.935 "reset": true, 00:23:08.935 "nvme_admin": false, 00:23:08.935 "nvme_io": false, 00:23:08.935 "nvme_io_md": false, 00:23:08.935 "write_zeroes": true, 00:23:08.935 "zcopy": false, 00:23:08.935 "get_zone_info": false, 00:23:08.935 "zone_management": false, 00:23:08.935 "zone_append": false, 00:23:08.935 "compare": false, 00:23:08.935 "compare_and_write": false, 00:23:08.935 "abort": false, 00:23:08.935 "seek_hole": true, 00:23:08.935 "seek_data": true, 00:23:08.935 "copy": false, 00:23:08.935 "nvme_iov_md": false 00:23:08.935 }, 00:23:08.935 "driver_specific": { 00:23:08.935 "lvol": { 00:23:08.935 "lvol_store_uuid": "64aad473-7af9-4910-9850-8a822aee56a5", 00:23:08.935 "base_bdev": "nvme0n1", 00:23:08.935 "thin_provision": true, 00:23:08.935 "num_allocated_clusters": 0, 00:23:08.935 "snapshot": false, 00:23:08.935 "clone": false, 00:23:08.935 "esnap_clone": false 00:23:08.935 } 00:23:08.935 } 00:23:08.935 } 00:23:08.935 ]' 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f9342aab-cb52-400e-a16c-9265d5d23456 
--l2p_dram_limit 10' 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:08.935 00:55:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f9342aab-cb52-400e-a16c-9265d5d23456 --l2p_dram_limit 10 -c nvc0n1p0 00:23:09.196 [2024-11-17 00:55:01.156523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.196 [2024-11-17 00:55:01.156561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:09.196 [2024-11-17 00:55:01.156572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:09.196 [2024-11-17 00:55:01.156582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.196 [2024-11-17 00:55:01.156621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.196 [2024-11-17 00:55:01.156629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:09.196 [2024-11-17 00:55:01.156636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:09.196 [2024-11-17 00:55:01.156645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.196 [2024-11-17 00:55:01.156664] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:09.196 [2024-11-17 00:55:01.156870] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:09.196 [2024-11-17 00:55:01.156883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.196 [2024-11-17 00:55:01.156892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:09.196 [2024-11-17 00:55:01.156900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:23:09.196 [2024-11-17 00:55:01.156907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.196 [2024-11-17 00:55:01.157028] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7ce98c52-c64d-4fbe-a40a-a5fd6631e08d 00:23:09.196 [2024-11-17 00:55:01.157977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.196 [2024-11-17 00:55:01.158003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:09.197 [2024-11-17 00:55:01.158012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:09.197 [2024-11-17 00:55:01.158019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.197 [2024-11-17 00:55:01.162742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.197 [2024-11-17 00:55:01.162766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:09.197 [2024-11-17 00:55:01.162778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:23:09.197 [2024-11-17 00:55:01.162785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.197 [2024-11-17 00:55:01.162843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.197 [2024-11-17 00:55:01.162850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:09.197 [2024-11-17 00:55:01.162862] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:09.197 [2024-11-17 00:55:01.162869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.197 [2024-11-17 00:55:01.162900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.197 [2024-11-17 00:55:01.162907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:09.197 [2024-11-17 00:55:01.162914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:09.197 [2024-11-17 00:55:01.162920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.197 [2024-11-17 00:55:01.162937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:09.197 [2024-11-17 00:55:01.164194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.197 [2024-11-17 00:55:01.164220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:09.197 [2024-11-17 00:55:01.164230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:23:09.197 [2024-11-17 00:55:01.164237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.197 [2024-11-17 00:55:01.164262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.197 [2024-11-17 00:55:01.164270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:09.197 [2024-11-17 00:55:01.164276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:09.197 [2024-11-17 00:55:01.164285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.197 [2024-11-17 00:55:01.164297] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:09.197 [2024-11-17 00:55:01.164424] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:09.197 [2024-11-17 00:55:01.164435] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:09.197 [2024-11-17 00:55:01.164446] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:09.197 [2024-11-17 00:55:01.164453] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:09.197 [2024-11-17 00:55:01.164467] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:09.197 [2024-11-17 00:55:01.164474] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:09.197 [2024-11-17 00:55:01.164483] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:09.197 [2024-11-17 00:55:01.164489] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:09.197 [2024-11-17 00:55:01.164496] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:09.197 [2024-11-17 00:55:01.164503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.197 [2024-11-17 00:55:01.164511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:09.197 [2024-11-17 00:55:01.164518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:23:09.197 [2024-11-17 00:55:01.164525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.197 [2024-11-17 00:55:01.164595] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.197 [2024-11-17 00:55:01.164608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:09.197 [2024-11-17 00:55:01.164614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:09.197 [2024-11-17 00:55:01.164621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.197 [2024-11-17 00:55:01.164698] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:09.197 [2024-11-17 00:55:01.164711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:09.197 [2024-11-17 00:55:01.164718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.197 [2024-11-17 00:55:01.164726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:09.197 [2024-11-17 00:55:01.164748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:09.197 [2024-11-17 00:55:01.164760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:09.197 [2024-11-17 00:55:01.164766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.197 [2024-11-17 00:55:01.164779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:09.197 [2024-11-17 00:55:01.164786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:09.197 [2024-11-17 00:55:01.164791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.197 [2024-11-17 00:55:01.164799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:09.197 [2024-11-17 00:55:01.164804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:09.197 [2024-11-17 00:55:01.164812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:09.197 [2024-11-17 00:55:01.164823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:09.197 [2024-11-17 00:55:01.164828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:09.197 [2024-11-17 00:55:01.164843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.197 [2024-11-17 00:55:01.164856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:09.197 [2024-11-17 00:55:01.164863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.197 [2024-11-17 00:55:01.164876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:09.197 [2024-11-17 00:55:01.164882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.197 [2024-11-17 00:55:01.164895] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:09.197 [2024-11-17 00:55:01.164905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.197 [2024-11-17 00:55:01.164918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:09.197 [2024-11-17 00:55:01.164924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.197 [2024-11-17 00:55:01.164937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:09.197 [2024-11-17 00:55:01.164944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:09.197 [2024-11-17 00:55:01.164949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.197 [2024-11-17 00:55:01.164957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:09.197 [2024-11-17 00:55:01.164962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:09.197 [2024-11-17 00:55:01.164969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:09.197 [2024-11-17 00:55:01.164984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:09.197 [2024-11-17 00:55:01.164991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.197 [2024-11-17 00:55:01.164998] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:09.197 [2024-11-17 00:55:01.165004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:09.197 [2024-11-17 00:55:01.165013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.197 [2024-11-17 00:55:01.165022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.197 [2024-11-17 00:55:01.165030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:09.197 [2024-11-17 00:55:01.165036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:09.197 [2024-11-17 00:55:01.165043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:09.197 [2024-11-17 00:55:01.165048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:09.197 [2024-11-17 00:55:01.165057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:09.197 [2024-11-17 00:55:01.165063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:09.197 [2024-11-17 00:55:01.165075] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:09.197 [2024-11-17 00:55:01.165083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.197 [2024-11-17 00:55:01.165092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:09.197 [2024-11-17 00:55:01.165098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:09.197 [2024-11-17 00:55:01.165106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:09.197 [2024-11-17 00:55:01.165113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:09.197 [2024-11-17 00:55:01.165121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:09.198 [2024-11-17 00:55:01.165128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:09.198 [2024-11-17 00:55:01.165136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:09.198 [2024-11-17 00:55:01.165142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:09.198 [2024-11-17 00:55:01.165150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:09.198 [2024-11-17 00:55:01.165157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:09.198 [2024-11-17 00:55:01.165165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:09.198 [2024-11-17 00:55:01.165171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:09.198 [2024-11-17 00:55:01.165178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:09.198 [2024-11-17 00:55:01.165185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:09.198 [2024-11-17 00:55:01.165193] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:09.198 [2024-11-17 00:55:01.165202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.198 [2024-11-17 00:55:01.165212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:09.198 [2024-11-17 00:55:01.165218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:09.198 [2024-11-17 00:55:01.165226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:09.198 [2024-11-17 00:55:01.165232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:09.198 [2024-11-17 00:55:01.165240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.198 [2024-11-17 00:55:01.165247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:09.198 [2024-11-17 00:55:01.165257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:23:09.198 [2024-11-17 00:55:01.165262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.198 [2024-11-17 00:55:01.165291] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:09.198 [2024-11-17 00:55:01.165298] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:12.497 [2024-11-17 00:55:04.434256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.497 [2024-11-17 00:55:04.434349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:12.497 [2024-11-17 00:55:04.434396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3268.941 ms 00:23:12.497 [2024-11-17 00:55:04.434410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.497 [2024-11-17 00:55:04.450215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.497 [2024-11-17 00:55:04.450272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:12.497 [2024-11-17 00:55:04.450293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.675 ms 00:23:12.497 [2024-11-17 00:55:04.450304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.497 [2024-11-17 00:55:04.450452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.497 [2024-11-17 00:55:04.450467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:12.497 [2024-11-17 00:55:04.450483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:12.497 [2024-11-17 00:55:04.450491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.497 [2024-11-17 00:55:04.463089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.497 [2024-11-17 00:55:04.463143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:12.497 [2024-11-17 00:55:04.463164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.530 ms 00:23:12.497 [2024-11-17 00:55:04.463173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.497 [2024-11-17 00:55:04.463211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.497 [2024-11-17 00:55:04.463224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:12.497 [2024-11-17 00:55:04.463236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:12.497 [2024-11-17 00:55:04.463245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.497 [2024-11-17 00:55:04.463830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.497 [2024-11-17 00:55:04.463875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:12.497 [2024-11-17 00:55:04.463891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:23:12.497 [2024-11-17 00:55:04.463900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.497 [2024-11-17 00:55:04.464038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.497 [2024-11-17 00:55:04.464052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:12.497 [2024-11-17 00:55:04.464066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:12.497 [2024-11-17 00:55:04.464080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.497 [2024-11-17 00:55:04.487771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.497 [2024-11-17 00:55:04.487873] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:12.497 [2024-11-17 00:55:04.487912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.652 ms 00:23:12.497 [2024-11-17 00:55:04.487935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.497 [2024-11-17 00:55:04.498577] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:12.497 [2024-11-17 00:55:04.502498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.497 [2024-11-17 00:55:04.502549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:12.497 [2024-11-17 00:55:04.502561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.317 ms 00:23:12.497 [2024-11-17 00:55:04.502572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.757 [2024-11-17 00:55:04.590178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.590248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:12.758 [2024-11-17 00:55:04.590263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.567 ms 00:23:12.758 [2024-11-17 00:55:04.590278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.590495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.590518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:12.758 [2024-11-17 00:55:04.590528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:23:12.758 [2024-11-17 00:55:04.590539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.597076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.597138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:12.758 [2024-11-17 00:55:04.597151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.514 ms 00:23:12.758 [2024-11-17 00:55:04.597162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.602639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.602694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:12.758 [2024-11-17 00:55:04.602706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.443 ms 00:23:12.758 [2024-11-17 00:55:04.602717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.603048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.603063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:12.758 [2024-11-17 00:55:04.603074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:23:12.758 [2024-11-17 00:55:04.603088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.646908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.646972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:12.758 [2024-11-17 00:55:04.646984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.781 ms 00:23:12.758 [2024-11-17 00:55:04.646996] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.654391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.654440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:12.758 [2024-11-17 00:55:04.654451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.310 ms 00:23:12.758 [2024-11-17 00:55:04.654462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.660617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.660673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:12.758 [2024-11-17 00:55:04.660684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:23:12.758 [2024-11-17 00:55:04.660694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.667344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.667418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:12.758 [2024-11-17 00:55:04.667429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.587 ms 00:23:12.758 [2024-11-17 00:55:04.667443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.667499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.667512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:12.758 [2024-11-17 00:55:04.667522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:12.758 [2024-11-17 00:55:04.667533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.667609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.758 [2024-11-17 00:55:04.667623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:12.758 [2024-11-17 00:55:04.667634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:12.758 [2024-11-17 00:55:04.667654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.758 [2024-11-17 00:55:04.668834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3511.743 ms, result 0 00:23:12.758 { 00:23:12.758 "name": "ftl0", 00:23:12.758 "uuid": "7ce98c52-c64d-4fbe-a40a-a5fd6631e08d" 00:23:12.758 } 00:23:12.758 00:55:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:12.758 00:55:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:13.019 00:55:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:13.019 00:55:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:13.019 00:55:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:13.019 /dev/nbd0 00:23:13.019 00:55:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:13.019 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:13.019 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:13.019 00:55:05 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:13.019 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:13.020 1+0 records in 00:23:13.020 1+0 records out 00:23:13.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280462 s, 14.6 MB/s 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:13.020 00:55:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:13.281 [2024-11-17 00:55:05.109926] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:13.281 [2024-11-17 00:55:05.110048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89829 ] 00:23:13.281 [2024-11-17 00:55:05.256929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.281 [2024-11-17 00:55:05.327183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.667  [2024-11-17T00:55:07.669Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-17T00:55:08.605Z] Copying: 399/1024 [MB] (213 MBps) [2024-11-17T00:55:09.542Z] Copying: 659/1024 [MB] (259 MBps) [2024-11-17T00:55:10.109Z] Copying: 913/1024 [MB] (254 MBps) [2024-11-17T00:55:10.109Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:23:18.046 00:23:18.046 00:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:20.585 00:55:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:20.585 [2024-11-17 00:55:12.218931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
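The trace above is the test's data-integrity setup: ftl0 is exported over NBD, the waitfornbd helper polls /proc/partitions and probes the device with a single 4 KiB direct read, then spdk_dd generates a 1 GiB random payload (262144 x 4096-byte blocks), records its md5, and replays it onto /dev/nbd0 with O_DIRECT. A minimal sketch of that sequence, with paths, RPCs, and spdk_dd flags taken from the log; the set -e handling and the poll delay are assumptions, not the helper's exact code:

#!/usr/bin/env bash
# Sketch of the fill-and-checksum pattern recorded above.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk

# Expose the FTL bdev as a kernel block device.
"$SPDK/scripts/rpc.py" nbd_start_disk ftl0 /dev/nbd0

# waitfornbd: poll until nbd0 shows up, then prove it is readable with
# one 4 KiB direct read (same probe as common/autotest_common.sh).
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1   # assumed pacing; the real helper's delay may differ
done
dd if=/dev/nbd0 of="$SPDK/test/ftl/nbdtest" bs=4096 count=1 iflag=direct
rm -f "$SPDK/test/ftl/nbdtest"

# 1 GiB random payload: 262144 blocks x 4096 bytes, plus its checksum
# for comparison after the dirty shutdown later in the test.
"$SPDK/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom \
    --of="$SPDK/test/ftl/testfile" --bs=4096 --count=262144
md5sum "$SPDK/test/ftl/testfile"

# Replay the payload onto the FTL device, bypassing the page cache so
# every block passes through FTL before the crash is simulated.
"$SPDK/build/bin/spdk_dd" -m 0x2 --if="$SPDK/test/ftl/testfile" \
    --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct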
00:23:20.585 [2024-11-17 00:55:12.219216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89905 ] 00:23:20.585 [2024-11-17 00:55:12.356695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.585 [2024-11-17 00:55:12.401879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.528  [2024-11-17T00:55:14.532Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-17T00:55:15.474Z] Copying: 36/1024 [MB] (16 MBps) [2024-11-17T00:55:16.862Z] Copying: 50/1024 [MB] (14 MBps) [2024-11-17T00:55:17.805Z] Copying: 67/1024 [MB] (16 MBps) [2024-11-17T00:55:18.747Z] Copying: 86/1024 [MB] (19 MBps) [2024-11-17T00:55:19.690Z] Copying: 103/1024 [MB] (16 MBps) [2024-11-17T00:55:20.634Z] Copying: 117/1024 [MB] (14 MBps) [2024-11-17T00:55:21.576Z] Copying: 134/1024 [MB] (17 MBps) [2024-11-17T00:55:22.519Z] Copying: 158/1024 [MB] (23 MBps) [2024-11-17T00:55:23.904Z] Copying: 172/1024 [MB] (14 MBps) [2024-11-17T00:55:24.478Z] Copying: 187/1024 [MB] (14 MBps) [2024-11-17T00:55:25.863Z] Copying: 202/1024 [MB] (14 MBps) [2024-11-17T00:55:26.807Z] Copying: 215/1024 [MB] (13 MBps) [2024-11-17T00:55:27.752Z] Copying: 234/1024 [MB] (19 MBps) [2024-11-17T00:55:28.696Z] Copying: 252/1024 [MB] (17 MBps) [2024-11-17T00:55:29.640Z] Copying: 268/1024 [MB] (15 MBps) [2024-11-17T00:55:30.580Z] Copying: 280/1024 [MB] (12 MBps) [2024-11-17T00:55:31.626Z] Copying: 297/1024 [MB] (16 MBps) [2024-11-17T00:55:32.569Z] Copying: 313/1024 [MB] (16 MBps) [2024-11-17T00:55:33.513Z] Copying: 331/1024 [MB] (17 MBps) [2024-11-17T00:55:34.901Z] Copying: 346/1024 [MB] (15 MBps) [2024-11-17T00:55:35.472Z] Copying: 360/1024 [MB] (14 MBps) [2024-11-17T00:55:36.860Z] Copying: 377/1024 [MB] (16 MBps) [2024-11-17T00:55:37.802Z] Copying: 389/1024 [MB] (12 MBps) [2024-11-17T00:55:38.747Z] Copying: 400/1024 [MB] (10 MBps) [2024-11-17T00:55:39.700Z] Copying: 412/1024 [MB] (11 MBps) [2024-11-17T00:55:40.640Z] Copying: 423/1024 [MB] (11 MBps) [2024-11-17T00:55:41.583Z] Copying: 440/1024 [MB] (16 MBps) [2024-11-17T00:55:42.527Z] Copying: 452/1024 [MB] (12 MBps) [2024-11-17T00:55:43.470Z] Copying: 464/1024 [MB] (11 MBps) [2024-11-17T00:55:44.856Z] Copying: 476/1024 [MB] (12 MBps) [2024-11-17T00:55:45.800Z] Copying: 486/1024 [MB] (10 MBps) [2024-11-17T00:55:46.742Z] Copying: 497/1024 [MB] (10 MBps) [2024-11-17T00:55:47.688Z] Copying: 514/1024 [MB] (17 MBps) [2024-11-17T00:55:48.634Z] Copying: 529/1024 [MB] (14 MBps) [2024-11-17T00:55:49.578Z] Copying: 542/1024 [MB] (13 MBps) [2024-11-17T00:55:50.520Z] Copying: 557/1024 [MB] (14 MBps) [2024-11-17T00:55:51.465Z] Copying: 574/1024 [MB] (17 MBps) [2024-11-17T00:55:52.849Z] Copying: 591/1024 [MB] (16 MBps) [2024-11-17T00:55:53.794Z] Copying: 612/1024 [MB] (21 MBps) [2024-11-17T00:55:54.738Z] Copying: 626/1024 [MB] (13 MBps) [2024-11-17T00:55:55.682Z] Copying: 642/1024 [MB] (16 MBps) [2024-11-17T00:55:56.626Z] Copying: 658/1024 [MB] (15 MBps) [2024-11-17T00:55:57.571Z] Copying: 680/1024 [MB] (21 MBps) [2024-11-17T00:55:58.514Z] Copying: 695/1024 [MB] (15 MBps) [2024-11-17T00:55:59.897Z] Copying: 712/1024 [MB] (16 MBps) [2024-11-17T00:56:00.470Z] Copying: 729/1024 [MB] (17 MBps) [2024-11-17T00:56:01.855Z] Copying: 745/1024 [MB] (15 MBps) [2024-11-17T00:56:02.795Z] Copying: 761/1024 [MB] (16 MBps) [2024-11-17T00:56:03.796Z] Copying: 780/1024 [MB] (18 MBps) 
[2024-11-17T00:56:04.737Z] Copying: 797/1024 [MB] (17 MBps) [2024-11-17T00:56:05.679Z] Copying: 815/1024 [MB] (18 MBps) [2024-11-17T00:56:06.628Z] Copying: 832/1024 [MB] (16 MBps) [2024-11-17T00:56:07.572Z] Copying: 848/1024 [MB] (16 MBps) [2024-11-17T00:56:08.512Z] Copying: 863/1024 [MB] (14 MBps) [2024-11-17T00:56:09.910Z] Copying: 884/1024 [MB] (20 MBps) [2024-11-17T00:56:10.482Z] Copying: 900/1024 [MB] (16 MBps) [2024-11-17T00:56:11.870Z] Copying: 919/1024 [MB] (18 MBps) [2024-11-17T00:56:12.816Z] Copying: 931/1024 [MB] (12 MBps) [2024-11-17T00:56:13.757Z] Copying: 945/1024 [MB] (13 MBps) [2024-11-17T00:56:14.695Z] Copying: 961/1024 [MB] (16 MBps) [2024-11-17T00:56:15.634Z] Copying: 979/1024 [MB] (18 MBps) [2024-11-17T00:56:16.576Z] Copying: 1006/1024 [MB] (26 MBps) [2024-11-17T00:56:16.836Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:24:24.773 00:24:24.773 00:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:24.773 00:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:25.034 00:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:25.034 [2024-11-17 00:56:17.083781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.034 [2024-11-17 00:56:17.083848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:25.034 [2024-11-17 00:56:17.083867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:25.034 [2024-11-17 00:56:17.083876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.034 [2024-11-17 00:56:17.083904] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:25.034 [2024-11-17 00:56:17.084719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.034 [2024-11-17 00:56:17.084766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:25.034 [2024-11-17 00:56:17.084779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:24:25.034 [2024-11-17 00:56:17.084811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.034 [2024-11-17 00:56:17.087879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.034 [2024-11-17 00:56:17.087933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:25.034 [2024-11-17 00:56:17.087951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:24:25.034 [2024-11-17 00:56:17.087961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.106780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.298 [2024-11-17 00:56:17.106841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:25.298 [2024-11-17 00:56:17.106855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.799 ms 00:24:25.298 [2024-11-17 00:56:17.106865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.113053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.298 [2024-11-17 00:56:17.113101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:25.298 [2024-11-17 00:56:17.113113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.140 ms 00:24:25.298 
[2024-11-17 00:56:17.113129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.115914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.298 [2024-11-17 00:56:17.115972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:25.298 [2024-11-17 00:56:17.115983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.701 ms 00:24:25.298 [2024-11-17 00:56:17.115993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.122729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.298 [2024-11-17 00:56:17.122789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:25.298 [2024-11-17 00:56:17.122814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.688 ms 00:24:25.298 [2024-11-17 00:56:17.122824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.122959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.298 [2024-11-17 00:56:17.122973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:25.298 [2024-11-17 00:56:17.122982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:25.298 [2024-11-17 00:56:17.122993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.126325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.298 [2024-11-17 00:56:17.126395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:25.298 [2024-11-17 00:56:17.126406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.312 ms 00:24:25.298 [2024-11-17 00:56:17.126416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.129164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.298 [2024-11-17 00:56:17.129223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:25.298 [2024-11-17 00:56:17.129233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.700 ms 00:24:25.298 [2024-11-17 00:56:17.129243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.131581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.298 [2024-11-17 00:56:17.131637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:25.298 [2024-11-17 00:56:17.131647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:24:25.298 [2024-11-17 00:56:17.131657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.134022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.298 [2024-11-17 00:56:17.134077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:25.298 [2024-11-17 00:56:17.134087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:24:25.298 [2024-11-17 00:56:17.134096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.298 [2024-11-17 00:56:17.134141] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:25.299 [2024-11-17 00:56:17.134160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 
00:56:17.134170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 
[2024-11-17 00:56:17.134420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:24:25.299 [2024-11-17 00:56:17.134646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:25.299 [2024-11-17 00:56:17.134866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.134991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:25.300 [2024-11-17 00:56:17.135120] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:24:25.300 [2024-11-17 00:56:17.135128] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ce98c52-c64d-4fbe-a40a-a5fd6631e08d 00:24:25.300 [2024-11-17 00:56:17.135142] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:25.300 [2024-11-17 00:56:17.135150] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:25.300 [2024-11-17 00:56:17.135164] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:25.300 [2024-11-17 00:56:17.135173] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:25.300 [2024-11-17 00:56:17.135182] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:25.300 [2024-11-17 00:56:17.135191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:25.300 [2024-11-17 00:56:17.135201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:25.300 [2024-11-17 00:56:17.135208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:25.300 [2024-11-17 00:56:17.135217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:25.300 [2024-11-17 00:56:17.135224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.300 [2024-11-17 00:56:17.135234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:25.300 [2024-11-17 00:56:17.135243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:24:25.300 [2024-11-17 00:56:17.135253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.137658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.300 [2024-11-17 00:56:17.137709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:25.300 [2024-11-17 00:56:17.137721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.384 ms 00:24:25.300 [2024-11-17 00:56:17.137732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.137861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.300 [2024-11-17 00:56:17.137873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:25.300 [2024-11-17 00:56:17.137883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:25.300 [2024-11-17 00:56:17.137894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.146123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.300 [2024-11-17 00:56:17.146178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.300 [2024-11-17 00:56:17.146190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.300 [2024-11-17 00:56:17.146201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.146269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.300 [2024-11-17 00:56:17.146281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.300 [2024-11-17 00:56:17.146289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.300 [2024-11-17 00:56:17.146300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.146402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.300 
[2024-11-17 00:56:17.146419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.300 [2024-11-17 00:56:17.146427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.300 [2024-11-17 00:56:17.146437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.146456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.300 [2024-11-17 00:56:17.146466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.300 [2024-11-17 00:56:17.146479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.300 [2024-11-17 00:56:17.146489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.159810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.300 [2024-11-17 00:56:17.159873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.300 [2024-11-17 00:56:17.159884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.300 [2024-11-17 00:56:17.159894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.170507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.300 [2024-11-17 00:56:17.170560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.300 [2024-11-17 00:56:17.170571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.300 [2024-11-17 00:56:17.170581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.170655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.300 [2024-11-17 00:56:17.170671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.300 [2024-11-17 00:56:17.170680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.300 [2024-11-17 00:56:17.170690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.170786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.300 [2024-11-17 00:56:17.170799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.300 [2024-11-17 00:56:17.170807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.300 [2024-11-17 00:56:17.170817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.300 [2024-11-17 00:56:17.170892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.300 [2024-11-17 00:56:17.170908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.300 [2024-11-17 00:56:17.170916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.300 [2024-11-17 00:56:17.170926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.301 [2024-11-17 00:56:17.170958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.301 [2024-11-17 00:56:17.170969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:25.301 [2024-11-17 00:56:17.170977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.301 [2024-11-17 00:56:17.170988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.301 [2024-11-17 00:56:17.171029] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.301 [2024-11-17 00:56:17.171059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.301 [2024-11-17 00:56:17.171068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.301 [2024-11-17 00:56:17.171078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.301 [2024-11-17 00:56:17.171130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.301 [2024-11-17 00:56:17.171161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.301 [2024-11-17 00:56:17.171170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.301 [2024-11-17 00:56:17.171179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.301 [2024-11-17 00:56:17.171323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.509 ms, result 0 00:24:25.301 true 00:24:25.301 00:56:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89693 00:24:25.301 00:56:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89693 00:24:25.301 00:56:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:25.301 [2024-11-17 00:56:17.267906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:25.301 [2024-11-17 00:56:17.268048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90585 ] 00:24:25.561 [2024-11-17 00:56:17.420718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.561 [2024-11-17 00:56:17.470178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.505  [2024-11-17T00:56:19.943Z] Copying: 188/1024 [MB] (188 MBps) [2024-11-17T00:56:20.878Z] Copying: 422/1024 [MB] (233 MBps) [2024-11-17T00:56:21.814Z] Copying: 683/1024 [MB] (260 MBps) [2024-11-17T00:56:22.073Z] Copying: 940/1024 [MB] (256 MBps) [2024-11-17T00:56:22.073Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:24:30.010 00:24:30.010 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89693 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:30.010 00:56:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:30.268 [2024-11-17 00:56:22.073472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
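This is the dirty-shutdown step itself: the clean "FTL shutdown" traced above is immediately followed by a SIGKILL of the spdk_tgt that owned ftl0, after which a second payload is written through a standalone spdk_dd that builds its bdev stack from the saved ftl.json and seeks past the first 262144 blocks. A sketch of that step, using the PID, paths, and flags shown in this run; the shell variables are illustrative assumptions:

#!/usr/bin/env bash
# Sketch of the dirty-shutdown replay recorded above.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk
svcpid=89693   # the spdk_tgt PID recorded earlier in this run

# Crash the target instead of unloading it: FTL is left dirty on disk.
kill -9 "$svcpid"
rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"

# Second 1 GiB random payload, destined for the upper half of the device.
"$SPDK/build/bin/spdk_dd" --if=/dev/urandom \
    --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144

# Write it through a fresh, target-less bdev stack built from the saved
# JSON config; --seek=262144 places it after the first payload. Opening
# ftl0 here is what produces the recovery trace that follows (blobstore
# recovery, "SHM: clean 0, shm_clean 0").
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
    --count=262144 --seek=262144 \
    --json="$SPDK/test/ftl/config/ftl.json"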
00:24:30.268 [2024-11-17 00:56:22.073570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90640 ] 00:24:30.268 [2024-11-17 00:56:22.210903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.268 [2024-11-17 00:56:22.242277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.268 [2024-11-17 00:56:22.324272] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.268 [2024-11-17 00:56:22.324324] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.530 [2024-11-17 00:56:22.385980] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:30.530 [2024-11-17 00:56:22.386255] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:30.530 [2024-11-17 00:56:22.386603] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:30.530 [2024-11-17 00:56:22.562636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.562669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:30.530 [2024-11-17 00:56:22.562681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:30.530 [2024-11-17 00:56:22.562687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.562719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.562728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:30.530 [2024-11-17 00:56:22.562734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:30.530 [2024-11-17 00:56:22.562739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.562755] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:30.530 [2024-11-17 00:56:22.562931] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:30.530 [2024-11-17 00:56:22.562942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.562947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:30.530 [2024-11-17 00:56:22.562954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:24:30.530 [2024-11-17 00:56:22.562961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.563854] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:30.530 [2024-11-17 00:56:22.565827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.565863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:30.530 [2024-11-17 00:56:22.565871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.973 ms 00:24:30.530 [2024-11-17 00:56:22.565877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.565918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.565925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:30.530 [2024-11-17 00:56:22.565932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:30.530 [2024-11-17 00:56:22.565937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.570265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.570291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:30.530 [2024-11-17 00:56:22.570298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.298 ms 00:24:30.530 [2024-11-17 00:56:22.570303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.570380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.570387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:30.530 [2024-11-17 00:56:22.570396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:30.530 [2024-11-17 00:56:22.570402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.570439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.570449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:30.530 [2024-11-17 00:56:22.570460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:30.530 [2024-11-17 00:56:22.570468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.570484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:30.530 [2024-11-17 00:56:22.571614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.571638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:30.530 [2024-11-17 00:56:22.571650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:24:30.530 [2024-11-17 00:56:22.571655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.571679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.571688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:30.530 [2024-11-17 00:56:22.571695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:30.530 [2024-11-17 00:56:22.571700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.571714] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:30.530 [2024-11-17 00:56:22.571727] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:30.530 [2024-11-17 00:56:22.571755] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:30.530 [2024-11-17 00:56:22.571770] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:30.530 [2024-11-17 00:56:22.571851] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:30.530 [2024-11-17 00:56:22.571863] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:30.530 
[2024-11-17 00:56:22.571874] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:30.530 [2024-11-17 00:56:22.571882] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:30.530 [2024-11-17 00:56:22.571891] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:30.530 [2024-11-17 00:56:22.571897] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:30.530 [2024-11-17 00:56:22.571902] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:30.530 [2024-11-17 00:56:22.571908] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:30.530 [2024-11-17 00:56:22.571914] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:30.530 [2024-11-17 00:56:22.571920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.571927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:30.530 [2024-11-17 00:56:22.571935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:24:30.530 [2024-11-17 00:56:22.571941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.572002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.530 [2024-11-17 00:56:22.572008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:30.530 [2024-11-17 00:56:22.572016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:30.530 [2024-11-17 00:56:22.572025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.530 [2024-11-17 00:56:22.572096] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:30.530 [2024-11-17 00:56:22.572103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:30.530 [2024-11-17 00:56:22.572111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.530 [2024-11-17 00:56:22.572118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.530 [2024-11-17 00:56:22.572123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:30.530 [2024-11-17 00:56:22.572128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:30.530 [2024-11-17 00:56:22.572133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:30.530 [2024-11-17 00:56:22.572138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:30.530 [2024-11-17 00:56:22.572144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:30.530 [2024-11-17 00:56:22.572149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.530 [2024-11-17 00:56:22.572154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:30.530 [2024-11-17 00:56:22.572159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:30.530 [2024-11-17 00:56:22.572164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.530 [2024-11-17 00:56:22.572169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:30.530 [2024-11-17 00:56:22.572179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:30.530 [2024-11-17 00:56:22.572184] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.530 [2024-11-17 00:56:22.572189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:30.530 [2024-11-17 00:56:22.572194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:30.531 [2024-11-17 00:56:22.572199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.531 [2024-11-17 00:56:22.572204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:30.531 [2024-11-17 00:56:22.572209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:30.531 [2024-11-17 00:56:22.572214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.531 [2024-11-17 00:56:22.572220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:30.531 [2024-11-17 00:56:22.572225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:30.531 [2024-11-17 00:56:22.572230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.531 [2024-11-17 00:56:22.572235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:30.531 [2024-11-17 00:56:22.572240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:30.531 [2024-11-17 00:56:22.572244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.531 [2024-11-17 00:56:22.572249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:30.531 [2024-11-17 00:56:22.572254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:30.531 [2024-11-17 00:56:22.572263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.531 [2024-11-17 00:56:22.572269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:30.531 [2024-11-17 00:56:22.572275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:30.531 [2024-11-17 00:56:22.572281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.531 [2024-11-17 00:56:22.572286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:30.531 [2024-11-17 00:56:22.572292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:30.531 [2024-11-17 00:56:22.572297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.531 [2024-11-17 00:56:22.572303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:30.531 [2024-11-17 00:56:22.572309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:30.531 [2024-11-17 00:56:22.572314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.531 [2024-11-17 00:56:22.572320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:30.531 [2024-11-17 00:56:22.572325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:30.531 [2024-11-17 00:56:22.572331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.531 [2024-11-17 00:56:22.572336] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:30.531 [2024-11-17 00:56:22.572343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:30.531 [2024-11-17 00:56:22.572351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.531 [2024-11-17 00:56:22.572369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.531 [2024-11-17 
00:56:22.572376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:30.531 [2024-11-17 00:56:22.572382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:30.531 [2024-11-17 00:56:22.572388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:30.531 [2024-11-17 00:56:22.572394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:30.531 [2024-11-17 00:56:22.572400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:30.531 [2024-11-17 00:56:22.572406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:30.531 [2024-11-17 00:56:22.572413] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:30.531 [2024-11-17 00:56:22.572420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.531 [2024-11-17 00:56:22.572428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:30.531 [2024-11-17 00:56:22.572434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:30.531 [2024-11-17 00:56:22.572440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:30.531 [2024-11-17 00:56:22.572447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:30.531 [2024-11-17 00:56:22.572453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:30.531 [2024-11-17 00:56:22.572459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:30.531 [2024-11-17 00:56:22.572466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:30.531 [2024-11-17 00:56:22.572473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:30.531 [2024-11-17 00:56:22.572479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:30.531 [2024-11-17 00:56:22.572486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:30.531 [2024-11-17 00:56:22.572492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:30.531 [2024-11-17 00:56:22.572498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:30.531 [2024-11-17 00:56:22.572504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:30.531 [2024-11-17 00:56:22.572510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:30.531 [2024-11-17 00:56:22.572516] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:30.531 [2024-11-17 00:56:22.572522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.531 [2024-11-17 00:56:22.572529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:30.531 [2024-11-17 00:56:22.572535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:30.531 [2024-11-17 00:56:22.572541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:30.531 [2024-11-17 00:56:22.572548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:30.531 [2024-11-17 00:56:22.572554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.531 [2024-11-17 00:56:22.572562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:30.531 [2024-11-17 00:56:22.572569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:24:30.531 [2024-11-17 00:56:22.572577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.596440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.596482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:30.791 [2024-11-17 00:56:22.596495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.828 ms 00:24:30.791 [2024-11-17 00:56:22.596504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.596606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.596616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:30.791 [2024-11-17 00:56:22.596628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:30.791 [2024-11-17 00:56:22.596636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.606410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.606454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:30.791 [2024-11-17 00:56:22.606469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.712 ms 00:24:30.791 [2024-11-17 00:56:22.606480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.606524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.606543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:30.791 [2024-11-17 00:56:22.606556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:30.791 [2024-11-17 00:56:22.606567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.606940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.606969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:30.791 [2024-11-17 00:56:22.606982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:24:30.791 [2024-11-17 00:56:22.606996] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.607196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.607218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:30.791 [2024-11-17 00:56:22.607236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:24:30.791 [2024-11-17 00:56:22.607248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.611941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.611963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:30.791 [2024-11-17 00:56:22.611976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.663 ms 00:24:30.791 [2024-11-17 00:56:22.611982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.613936] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:30.791 [2024-11-17 00:56:22.613966] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:30.791 [2024-11-17 00:56:22.613975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.613981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:30.791 [2024-11-17 00:56:22.613989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.930 ms 00:24:30.791 [2024-11-17 00:56:22.613994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.628873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.628907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:30.791 [2024-11-17 00:56:22.628915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.850 ms 00:24:30.791 [2024-11-17 00:56:22.628921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.630272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.630295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:30.791 [2024-11-17 00:56:22.630302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:24:30.791 [2024-11-17 00:56:22.630308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.631547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.631569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:30.791 [2024-11-17 00:56:22.631575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:24:30.791 [2024-11-17 00:56:22.631580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.631838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.631852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:30.791 [2024-11-17 00:56:22.631861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:24:30.791 [2024-11-17 00:56:22.631868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 
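Each management step above is reported by mngt/ftl_mngt.c trace_step as a trio of notices: a step name, a duration in milliseconds, and a status. A minimal standalone helper for tallying these per-step durations out of a saved console log might look like the sketch below. This is a hypothetical reader-side utility, not part of SPDK, and it assumes the pristine log layout of one trace entry per line (the wrapping in this capture joins several entries per line):

    /* ftl_step_times.c - sum per-step durations from FTL trace_step log lines.
     * Hypothetical log-reading helper; assumes one log entry per input line,
     * e.g. "... [FTL][ftl0] name: Initialize NV cache" followed later by
     *      "... [FTL][ftl0] duration: 9.712 ms".
     * Build: cc -o ftl_step_times ftl_step_times.c
     * Use:   ./ftl_step_times < console.log
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[4096];
        char name[256] = "";
        double total_ms = 0.0;
        char *p;

        while (fgets(line, sizeof(line), stdin)) {
            if ((p = strstr(line, "] name: ")) != NULL) {
                /* Remember the most recent step name. */
                snprintf(name, sizeof(name), "%s", p + strlen("] name: "));
                name[strcspn(name, "\n")] = '\0';
            } else if ((p = strstr(line, "] duration: ")) != NULL) {
                /* Pair the duration with the last seen name. */
                double ms = 0.0;
                if (sscanf(p + strlen("] duration: "), "%lf", &ms) == 1) {
                    printf("%-40s %8.3f ms\n", name, ms);
                    total_ms += ms;
                }
            }
        }
        printf("%-40s %8.3f ms\n", "total", total_ms);
        return 0;
    }

Run against this section of the log, the sum of the step durations approximates the figure that finish_msg prints at the end of the management process (e.g. "'FTL startup', duration = 94.762 ms" below); the two differ slightly because finish_msg measures wall time across the whole process, not just the traced steps.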
[2024-11-17 00:56:22.645522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.645556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:30.791 [2024-11-17 00:56:22.645565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.642 ms 00:24:30.791 [2024-11-17 00:56:22.645572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.651489] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:30.791 [2024-11-17 00:56:22.653646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.653670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:30.791 [2024-11-17 00:56:22.653678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.015 ms 00:24:30.791 [2024-11-17 00:56:22.653685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.653730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.653738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:30.791 [2024-11-17 00:56:22.653745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:30.791 [2024-11-17 00:56:22.653752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.653810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.653818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:30.791 [2024-11-17 00:56:22.653825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:30.791 [2024-11-17 00:56:22.653830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.653844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.653850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:30.791 [2024-11-17 00:56:22.653856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:30.791 [2024-11-17 00:56:22.653861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.653887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:30.791 [2024-11-17 00:56:22.653896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.653901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:30.791 [2024-11-17 00:56:22.653907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:30.791 [2024-11-17 00:56:22.653912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.656805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.791 [2024-11-17 00:56:22.656828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:30.791 [2024-11-17 00:56:22.656835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.880 ms 00:24:30.791 [2024-11-17 00:56:22.656841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.791 [2024-11-17 00:56:22.656896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.792 [2024-11-17 00:56:22.656906] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:30.792 [2024-11-17 00:56:22.656912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:30.792 [2024-11-17 00:56:22.656917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.792 [2024-11-17 00:56:22.657717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 94.762 ms, result 0 00:24:31.726  [2024-11-17T00:56:24.726Z] Copying: 50/1024 [MB] (50 MBps) [2024-11-17T00:56:25.676Z] Copying: 94/1024 [MB] (44 MBps) [2024-11-17T00:56:27.059Z] Copying: 109/1024 [MB] (14 MBps) [2024-11-17T00:56:28.011Z] Copying: 130/1024 [MB] (21 MBps) [2024-11-17T00:56:28.954Z] Copying: 160/1024 [MB] (29 MBps) [2024-11-17T00:56:29.896Z] Copying: 181/1024 [MB] (21 MBps) [2024-11-17T00:56:30.833Z] Copying: 197/1024 [MB] (16 MBps) [2024-11-17T00:56:31.775Z] Copying: 244/1024 [MB] (46 MBps) [2024-11-17T00:56:32.718Z] Copying: 272/1024 [MB] (27 MBps) [2024-11-17T00:56:34.143Z] Copying: 295/1024 [MB] (23 MBps) [2024-11-17T00:56:34.777Z] Copying: 318/1024 [MB] (22 MBps) [2024-11-17T00:56:35.719Z] Copying: 337/1024 [MB] (18 MBps) [2024-11-17T00:56:37.104Z] Copying: 359/1024 [MB] (22 MBps) [2024-11-17T00:56:37.675Z] Copying: 377/1024 [MB] (18 MBps) [2024-11-17T00:56:39.064Z] Copying: 396/1024 [MB] (19 MBps) [2024-11-17T00:56:40.002Z] Copying: 410/1024 [MB] (13 MBps) [2024-11-17T00:56:40.943Z] Copying: 430/1024 [MB] (19 MBps) [2024-11-17T00:56:41.888Z] Copying: 444/1024 [MB] (13 MBps) [2024-11-17T00:56:42.830Z] Copying: 459/1024 [MB] (15 MBps) [2024-11-17T00:56:43.776Z] Copying: 477/1024 [MB] (18 MBps) [2024-11-17T00:56:44.719Z] Copying: 491/1024 [MB] (13 MBps) [2024-11-17T00:56:46.107Z] Copying: 513380/1048576 [kB] (10020 kBps) [2024-11-17T00:56:46.682Z] Copying: 523596/1048576 [kB] (10216 kBps) [2024-11-17T00:56:48.074Z] Copying: 533776/1048576 [kB] (10180 kBps) [2024-11-17T00:56:49.020Z] Copying: 531/1024 [MB] (10 MBps) [2024-11-17T00:56:49.964Z] Copying: 541/1024 [MB] (10 MBps) [2024-11-17T00:56:50.910Z] Copying: 552/1024 [MB] (10 MBps) [2024-11-17T00:56:51.855Z] Copying: 562/1024 [MB] (10 MBps) [2024-11-17T00:56:52.799Z] Copying: 573/1024 [MB] (10 MBps) [2024-11-17T00:56:53.742Z] Copying: 583/1024 [MB] (10 MBps) [2024-11-17T00:56:54.685Z] Copying: 593/1024 [MB] (10 MBps) [2024-11-17T00:56:56.073Z] Copying: 604/1024 [MB] (10 MBps) [2024-11-17T00:56:57.015Z] Copying: 614/1024 [MB] (10 MBps) [2024-11-17T00:56:57.960Z] Copying: 625/1024 [MB] (10 MBps) [2024-11-17T00:56:58.902Z] Copying: 635/1024 [MB] (10 MBps) [2024-11-17T00:56:59.843Z] Copying: 645/1024 [MB] (10 MBps) [2024-11-17T00:57:00.797Z] Copying: 656/1024 [MB] (10 MBps) [2024-11-17T00:57:01.740Z] Copying: 667/1024 [MB] (10 MBps) [2024-11-17T00:57:02.675Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-17T00:57:04.058Z] Copying: 711/1024 [MB] (34 MBps) [2024-11-17T00:57:05.000Z] Copying: 729/1024 [MB] (17 MBps) [2024-11-17T00:57:05.984Z] Copying: 746/1024 [MB] (17 MBps) [2024-11-17T00:57:06.951Z] Copying: 767/1024 [MB] (20 MBps) [2024-11-17T00:57:07.893Z] Copying: 783/1024 [MB] (16 MBps) [2024-11-17T00:57:08.835Z] Copying: 803/1024 [MB] (20 MBps) [2024-11-17T00:57:09.777Z] Copying: 819/1024 [MB] (15 MBps) [2024-11-17T00:57:10.717Z] Copying: 829/1024 [MB] (10 MBps) [2024-11-17T00:57:12.092Z] Copying: 845/1024 [MB] (15 MBps) [2024-11-17T00:57:13.034Z] Copying: 871/1024 [MB] (26 MBps) [2024-11-17T00:57:13.975Z] Copying: 884/1024 [MB] (13 MBps) [2024-11-17T00:57:14.923Z] 
Copying: 904/1024 [MB] (19 MBps) [2024-11-17T00:57:15.873Z] Copying: 919/1024 [MB] (14 MBps) [2024-11-17T00:57:16.820Z] Copying: 929/1024 [MB] (10 MBps) [2024-11-17T00:57:17.763Z] Copying: 944/1024 [MB] (14 MBps) [2024-11-17T00:57:18.707Z] Copying: 960/1024 [MB] (16 MBps) [2024-11-17T00:57:20.092Z] Copying: 978/1024 [MB] (18 MBps) [2024-11-17T00:57:21.039Z] Copying: 991/1024 [MB] (13 MBps) [2024-11-17T00:57:21.985Z] Copying: 1009/1024 [MB] (17 MBps) [2024-11-17T00:57:22.929Z] Copying: 1020/1024 [MB] (10 MBps) [2024-11-17T00:57:22.929Z] Copying: 1048372/1048576 [kB] (3820 kBps) [2024-11-17T00:57:22.929Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-17 00:57:22.879092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.866 [2024-11-17 00:57:22.879170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:30.866 [2024-11-17 00:57:22.879187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:30.866 [2024-11-17 00:57:22.879197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.866 [2024-11-17 00:57:22.880983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:30.867 [2024-11-17 00:57:22.886342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.867 [2024-11-17 00:57:22.886411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:30.867 [2024-11-17 00:57:22.886424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.295 ms 00:25:30.867 [2024-11-17 00:57:22.886433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.867 [2024-11-17 00:57:22.897695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.867 [2024-11-17 00:57:22.897744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:30.867 [2024-11-17 00:57:22.897756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.981 ms 00:25:30.867 [2024-11-17 00:57:22.897764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.867 [2024-11-17 00:57:22.920057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.867 [2024-11-17 00:57:22.920108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:30.867 [2024-11-17 00:57:22.920127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.275 ms 00:25:30.867 [2024-11-17 00:57:22.920136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.867 [2024-11-17 00:57:22.926311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.867 [2024-11-17 00:57:22.926377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:30.867 [2024-11-17 00:57:22.926390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:25:30.867 [2024-11-17 00:57:22.926398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.130 [2024-11-17 00:57:22.929174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.130 [2024-11-17 00:57:22.929222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:31.130 [2024-11-17 00:57:22.929232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.726 ms 00:25:31.130 [2024-11-17 00:57:22.929240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.130 [2024-11-17 00:57:22.934006] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.130 [2024-11-17 00:57:22.934066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:31.130 [2024-11-17 00:57:22.934077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.723 ms 00:25:31.130 [2024-11-17 00:57:22.934085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.130 [2024-11-17 00:57:23.130086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.130 [2024-11-17 00:57:23.130149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:31.130 [2024-11-17 00:57:23.130164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 195.955 ms 00:25:31.130 [2024-11-17 00:57:23.130172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.130 [2024-11-17 00:57:23.132673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.130 [2024-11-17 00:57:23.132721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:31.130 [2024-11-17 00:57:23.132733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:25:31.130 [2024-11-17 00:57:23.132740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.130 [2024-11-17 00:57:23.134780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.130 [2024-11-17 00:57:23.134835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:31.130 [2024-11-17 00:57:23.134845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:25:31.130 [2024-11-17 00:57:23.134853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.130 [2024-11-17 00:57:23.136495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.130 [2024-11-17 00:57:23.136540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:31.130 [2024-11-17 00:57:23.136550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:25:31.130 [2024-11-17 00:57:23.136557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.130 [2024-11-17 00:57:23.138147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.130 [2024-11-17 00:57:23.138194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:31.130 [2024-11-17 00:57:23.138203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.524 ms 00:25:31.130 [2024-11-17 00:57:23.138211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.130 [2024-11-17 00:57:23.138246] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:31.130 [2024-11-17 00:57:23.138260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100608 / 261120 wr_cnt: 1 state: open 00:25:31.130 [2024-11-17 00:57:23.138272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138304] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:31.130 [2024-11-17 00:57:23.138457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 
00:57:23.138519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:25:31.131 [2024-11-17 00:57:23.138723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.138999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:31.131 [2024-11-17 00:57:23.139092] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:31.131 [2024-11-17 00:57:23.139110] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ce98c52-c64d-4fbe-a40a-a5fd6631e08d 00:25:31.131 [2024-11-17 00:57:23.139124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100608 00:25:31.131 [2024-11-17 00:57:23.139133] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101568 00:25:31.131 [2024-11-17 00:57:23.139147] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100608 00:25:31.131 [2024-11-17 00:57:23.139156] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:25:31.131 [2024-11-17 00:57:23.139164] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:31.131 [2024-11-17 00:57:23.139173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:31.131 [2024-11-17 00:57:23.139185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:31.131 [2024-11-17 00:57:23.139192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:31.131 [2024-11-17 00:57:23.139199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:31.131 [2024-11-17 00:57:23.139214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.131 [2024-11-17 00:57:23.139221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:31.131 [2024-11-17 00:57:23.139230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:25:31.132 [2024-11-17 00:57:23.139237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.141656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.132 [2024-11-17 00:57:23.141693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:31.132 [2024-11-17 00:57:23.141703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:25:31.132 [2024-11-17 00:57:23.141712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.141847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.132 [2024-11-17 00:57:23.141857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:31.132 [2024-11-17 00:57:23.141875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:25:31.132 [2024-11-17 00:57:23.141883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.148586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.148634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.132 [2024-11-17 00:57:23.148645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.148653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.148708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.148717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.132 [2024-11-17 00:57:23.148730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.148738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.148781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.148791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.132 [2024-11-17 00:57:23.148800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.148807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.148849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.148857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:25:31.132 [2024-11-17 00:57:23.148865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.148878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.161963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.162014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.132 [2024-11-17 00:57:23.162035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.162043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.171836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.171888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.132 [2024-11-17 00:57:23.171906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.171915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.171964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.171974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.132 [2024-11-17 00:57:23.171983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.171991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.172023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.172032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.132 [2024-11-17 00:57:23.172042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.172051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.172122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.172132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.132 [2024-11-17 00:57:23.172140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.172148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.172189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.172199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:31.132 [2024-11-17 00:57:23.172208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.172220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.172261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.172270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.132 [2024-11-17 00:57:23.172279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.172287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.172333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.132 [2024-11-17 00:57:23.172344] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.132 [2024-11-17 00:57:23.172425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.132 [2024-11-17 00:57:23.172435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.132 [2024-11-17 00:57:23.172570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 295.025 ms, result 0 00:25:32.076 00:25:32.076 00:25:32.338 00:57:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:34.886 00:57:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:34.886 [2024-11-17 00:57:26.457558] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:34.886 [2024-11-17 00:57:26.457703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91293 ] 00:25:34.886 [2024-11-17 00:57:26.604532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.886 [2024-11-17 00:57:26.654761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.886 [2024-11-17 00:57:26.769790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.886 [2024-11-17 00:57:26.769875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.886 [2024-11-17 00:57:26.931150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.886 [2024-11-17 00:57:26.931213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:34.886 [2024-11-17 00:57:26.931233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:34.886 [2024-11-17 00:57:26.931242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.886 [2024-11-17 00:57:26.931302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.886 [2024-11-17 00:57:26.931313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.886 [2024-11-17 00:57:26.931322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:34.886 [2024-11-17 00:57:26.931333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.886 [2024-11-17 00:57:26.931376] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:34.886 [2024-11-17 00:57:26.931764] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:34.886 [2024-11-17 00:57:26.931808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.886 [2024-11-17 00:57:26.931822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.886 [2024-11-17 00:57:26.931836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:25:34.886 [2024-11-17 00:57:26.931848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.886 [2024-11-17 00:57:26.933587] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 
0, shm_clean 0 00:25:34.886 [2024-11-17 00:57:26.937283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.886 [2024-11-17 00:57:26.937333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:34.886 [2024-11-17 00:57:26.937343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.698 ms 00:25:34.886 [2024-11-17 00:57:26.937369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.886 [2024-11-17 00:57:26.937454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.886 [2024-11-17 00:57:26.937464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:34.886 [2024-11-17 00:57:26.937473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:34.886 [2024-11-17 00:57:26.937481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.886 [2024-11-17 00:57:26.945831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.886 [2024-11-17 00:57:26.945874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.886 [2024-11-17 00:57:26.945885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.307 ms 00:25:34.886 [2024-11-17 00:57:26.945904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.886 [2024-11-17 00:57:26.946002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.886 [2024-11-17 00:57:26.946012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.886 [2024-11-17 00:57:26.946027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:34.886 [2024-11-17 00:57:26.946036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.886 [2024-11-17 00:57:26.946094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.886 [2024-11-17 00:57:26.946105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:34.886 [2024-11-17 00:57:26.946113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:34.886 [2024-11-17 00:57:26.946126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.886 [2024-11-17 00:57:26.946155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:35.149 [2024-11-17 00:57:26.948209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.149 [2024-11-17 00:57:26.948252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.149 [2024-11-17 00:57:26.948262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms 00:25:35.149 [2024-11-17 00:57:26.948270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.149 [2024-11-17 00:57:26.948304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.149 [2024-11-17 00:57:26.948312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.149 [2024-11-17 00:57:26.948324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:35.149 [2024-11-17 00:57:26.948332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.149 [2024-11-17 00:57:26.948383] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.149 [2024-11-17 00:57:26.948408] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.149 [2024-11-17 00:57:26.948445] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.149 [2024-11-17 00:57:26.948462] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.149 [2024-11-17 00:57:26.948572] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.149 [2024-11-17 00:57:26.948594] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.149 [2024-11-17 00:57:26.948606] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:35.149 [2024-11-17 00:57:26.948616] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.149 [2024-11-17 00:57:26.948632] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.149 [2024-11-17 00:57:26.948640] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:35.149 [2024-11-17 00:57:26.948647] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.149 [2024-11-17 00:57:26.948655] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.149 [2024-11-17 00:57:26.948664] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.149 [2024-11-17 00:57:26.948671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.150 [2024-11-17 00:57:26.948682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.150 [2024-11-17 00:57:26.948690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:25:35.150 [2024-11-17 00:57:26.948697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.150 [2024-11-17 00:57:26.948779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.150 [2024-11-17 00:57:26.948790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.150 [2024-11-17 00:57:26.948798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:35.150 [2024-11-17 00:57:26.948805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.150 [2024-11-17 00:57:26.948929] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.150 [2024-11-17 00:57:26.948941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.150 [2024-11-17 00:57:26.948950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.150 [2024-11-17 00:57:26.948965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.150 [2024-11-17 00:57:26.948974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.150 [2024-11-17 00:57:26.948982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.150 [2024-11-17 00:57:26.948990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:35.150 [2024-11-17 00:57:26.948999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.150 [2024-11-17 00:57:26.949007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949020] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.150 [2024-11-17 00:57:26.949029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.150 [2024-11-17 00:57:26.949037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:35.150 [2024-11-17 00:57:26.949045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.150 [2024-11-17 00:57:26.949056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.150 [2024-11-17 00:57:26.949065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:35.150 [2024-11-17 00:57:26.949074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:35.150 [2024-11-17 00:57:26.949090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:35.150 [2024-11-17 00:57:26.949098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.150 [2024-11-17 00:57:26.949115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.150 [2024-11-17 00:57:26.949131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.150 [2024-11-17 00:57:26.949140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.150 [2024-11-17 00:57:26.949158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.150 [2024-11-17 00:57:26.949166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.150 [2024-11-17 00:57:26.949190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.150 [2024-11-17 00:57:26.949198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.150 [2024-11-17 00:57:26.949214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.150 [2024-11-17 00:57:26.949222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.150 [2024-11-17 00:57:26.949237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.150 [2024-11-17 00:57:26.949245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:35.150 [2024-11-17 00:57:26.949252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.150 [2024-11-17 00:57:26.949260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.150 [2024-11-17 00:57:26.949267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:35.150 [2024-11-17 00:57:26.949276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.150 [2024-11-17 
00:57:26.949296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:35.150 [2024-11-17 00:57:26.949304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949312] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.150 [2024-11-17 00:57:26.949321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.150 [2024-11-17 00:57:26.949331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.150 [2024-11-17 00:57:26.949341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.150 [2024-11-17 00:57:26.949350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:35.150 [2024-11-17 00:57:26.949380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.150 [2024-11-17 00:57:26.949388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.150 [2024-11-17 00:57:26.949395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.150 [2024-11-17 00:57:26.949402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.150 [2024-11-17 00:57:26.949409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.150 [2024-11-17 00:57:26.949418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.150 [2024-11-17 00:57:26.949428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.150 [2024-11-17 00:57:26.949437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:35.150 [2024-11-17 00:57:26.949444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:35.150 [2024-11-17 00:57:26.949455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:35.150 [2024-11-17 00:57:26.949462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:35.150 [2024-11-17 00:57:26.949470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:35.150 [2024-11-17 00:57:26.949478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:35.150 [2024-11-17 00:57:26.949484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:35.150 [2024-11-17 00:57:26.949492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:35.150 [2024-11-17 00:57:26.949500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:35.150 [2024-11-17 00:57:26.949507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:35.150 [2024-11-17 00:57:26.949514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:25:35.150 [2024-11-17 00:57:26.949522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:35.150 [2024-11-17 00:57:26.949530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:35.150 [2024-11-17 00:57:26.949537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:35.150 [2024-11-17 00:57:26.949545] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:35.150 [2024-11-17 00:57:26.949554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.150 [2024-11-17 00:57:26.949563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.150 [2024-11-17 00:57:26.949571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.150 [2024-11-17 00:57:26.949581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.150 [2024-11-17 00:57:26.949588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.150 [2024-11-17 00:57:26.949597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.150 [2024-11-17 00:57:26.949605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.150 [2024-11-17 00:57:26.949615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:25:35.150 [2024-11-17 00:57:26.949623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.150 [2024-11-17 00:57:26.974202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.150 [2024-11-17 00:57:26.974277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.150 [2024-11-17 00:57:26.974297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.527 ms 00:25:35.150 [2024-11-17 00:57:26.974310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.150 [2024-11-17 00:57:26.974476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.150 [2024-11-17 00:57:26.974493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.150 [2024-11-17 00:57:26.974507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:25:35.150 [2024-11-17 00:57:26.974519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.150 [2024-11-17 00:57:26.987567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.150 [2024-11-17 00:57:26.987617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.150 [2024-11-17 00:57:26.987636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.957 ms 00:25:35.150 [2024-11-17 00:57:26.987648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.150 [2024-11-17 00:57:26.987683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:26.987692] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.151 [2024-11-17 00:57:26.987704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:35.151 [2024-11-17 00:57:26.987712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:26.988269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:26.988319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.151 [2024-11-17 00:57:26.988331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:25:35.151 [2024-11-17 00:57:26.988339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:26.988515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:26.988526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.151 [2024-11-17 00:57:26.988536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:25:35.151 [2024-11-17 00:57:26.988545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:26.995338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:26.995410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.151 [2024-11-17 00:57:26.995428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.763 ms 00:25:35.151 [2024-11-17 00:57:26.995442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:26.999244] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:35.151 [2024-11-17 00:57:26.999298] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:35.151 [2024-11-17 00:57:26.999310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:26.999319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:35.151 [2024-11-17 00:57:26.999328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.776 ms 00:25:35.151 [2024-11-17 00:57:26.999336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.015268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.015316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:35.151 [2024-11-17 00:57:27.015337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.848 ms 00:25:35.151 [2024-11-17 00:57:27.015346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.018132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.018178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:35.151 [2024-11-17 00:57:27.018190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.735 ms 00:25:35.151 [2024-11-17 00:57:27.018198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.021882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.021971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
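
A quick way to sanity-check the superblock metadata dumps above is to parse the `Region type:... ver:... blk_offs:... blk_sz:...` records and confirm the regions tile the address space without gaps (e.g. 0x20 + 0x5000 = 0x5020, exactly the next offset; and since the l2p region spans 0x5000 blocks and 80.00 MiB, a block here works out to 4 KiB). A minimal, illustrative Python sketch, with the sample lines copied from the nvc dump above; the regex and the contiguity check are this sketch's own, not an SPDK tool:

```python
import re

# Sample records copied verbatim from the "SB metadata layout - nvc" dump above.
LINES = """
Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
""".strip().splitlines()

# Pattern mirrors the dump_region line format seen in this log.
PAT = re.compile(
    r"Region type:(0x[0-9a-f]+) ver:(\d+) blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)"
)

regions = []
for line in LINES:
    m = PAT.search(line)
    if m:
        rtype, _ver, offs, size = m.groups()
        regions.append((int(offs, 16), int(size, 16), rtype))

regions.sort()
for (offs, size, rtype), (next_offs, _, _) in zip(regions, regions[1:]):
    # Each region should end exactly where the next one begins.
    assert offs + size == next_offs, f"gap/overlap after region {rtype}"
print("regions are contiguous")
```
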
00:25:35.151 [2024-11-17 00:57:27.022009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.634 ms 00:25:35.151 [2024-11-17 00:57:27.022028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.022785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.022847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.151 [2024-11-17 00:57:27.022871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:25:35.151 [2024-11-17 00:57:27.022890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.048044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.048119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:35.151 [2024-11-17 00:57:27.048133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.116 ms 00:25:35.151 [2024-11-17 00:57:27.048141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.056461] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:35.151 [2024-11-17 00:57:27.059914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.059963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:35.151 [2024-11-17 00:57:27.059979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.719 ms 00:25:35.151 [2024-11-17 00:57:27.059987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.060067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.060082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:35.151 [2024-11-17 00:57:27.060092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:35.151 [2024-11-17 00:57:27.060104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.061917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.061963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:35.151 [2024-11-17 00:57:27.061979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:25:35.151 [2024-11-17 00:57:27.061990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.062020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.062029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:35.151 [2024-11-17 00:57:27.062037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:35.151 [2024-11-17 00:57:27.062045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.062085] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:35.151 [2024-11-17 00:57:27.062095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.062106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:35.151 [2024-11-17 00:57:27.062114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 
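
Each management step above is traced by trace_step as an Action/name/duration/status quartet. The sketch below is one hedged way to pull those timings back out of a captured log: run over the whole startup sequence it lands a little under the 138.020 ms 'FTL startup' total that finish_msg reports just below, since time spent between steps is not attributed to any step. The regexes are assumptions tailored to this flattened log format, not part of SPDK:

```python
import re

# A step name runs up to the next relative timestamp (hh:mm:ss) in the flattened log.
NAME = re.compile(r"name: (.+?) \d{2}:\d{2}:\d{2}")
DUR = re.compile(r"duration: ([0-9.]+) ms")

def parse_steps(log_text):
    """Pair each traced step name with its duration, in log order."""
    names = NAME.findall(log_text)
    durs = [float(d) for d in DUR.findall(log_text)]
    return list(zip(names, durs))

# Tiny sample in the flattened format above (bracketed timestamps elided).
sample = ("name: Initialize L2P 00:25:35.151 [...] duration: 11.719 ms "
          "00:25:35.151 [...] status: 0")
steps = parse_steps(sample)
print(steps)                          # [('Initialize L2P', 11.719)]
print(sum(d for _, d in steps))       # per-step sum, an underestimate of the total
```
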
00:25:35.151 [2024-11-17 00:57:27.062122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.067715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.067764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:35.151 [2024-11-17 00:57:27.067775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.571 ms 00:25:35.151 [2024-11-17 00:57:27.067791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.067875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.151 [2024-11-17 00:57:27.067885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:35.151 [2024-11-17 00:57:27.067899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:35.151 [2024-11-17 00:57:27.067908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.151 [2024-11-17 00:57:27.069673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.020 ms, result 0 00:25:36.538  [2024-11-17T00:57:29.546Z] Copying: 1000/1048576 [kB] (1000 kBps) [2024-11-17T00:57:30.492Z] Copying: 4172/1048576 [kB] (3172 kBps) [2024-11-17T00:57:31.437Z] Copying: 14/1024 [MB] (10 MBps) [2024-11-17T00:57:32.381Z] Copying: 30/1024 [MB] (16 MBps) [2024-11-17T00:57:33.325Z] Copying: 46/1024 [MB] (16 MBps) [2024-11-17T00:57:34.269Z] Copying: 76/1024 [MB] (29 MBps) [2024-11-17T00:57:35.660Z] Copying: 106/1024 [MB] (30 MBps) [2024-11-17T00:57:36.604Z] Copying: 133/1024 [MB] (27 MBps) [2024-11-17T00:57:37.632Z] Copying: 160/1024 [MB] (26 MBps) [2024-11-17T00:57:38.579Z] Copying: 184/1024 [MB] (24 MBps) [2024-11-17T00:57:39.518Z] Copying: 213/1024 [MB] (29 MBps) [2024-11-17T00:57:40.464Z] Copying: 244/1024 [MB] (31 MBps) [2024-11-17T00:57:41.409Z] Copying: 264/1024 [MB] (19 MBps) [2024-11-17T00:57:42.351Z] Copying: 283/1024 [MB] (19 MBps) [2024-11-17T00:57:43.296Z] Copying: 302/1024 [MB] (19 MBps) [2024-11-17T00:57:44.680Z] Copying: 323/1024 [MB] (20 MBps) [2024-11-17T00:57:45.620Z] Copying: 342/1024 [MB] (19 MBps) [2024-11-17T00:57:46.563Z] Copying: 364/1024 [MB] (22 MBps) [2024-11-17T00:57:47.499Z] Copying: 380/1024 [MB] (16 MBps) [2024-11-17T00:57:48.444Z] Copying: 407/1024 [MB] (26 MBps) [2024-11-17T00:57:49.390Z] Copying: 432/1024 [MB] (25 MBps) [2024-11-17T00:57:50.336Z] Copying: 460/1024 [MB] (27 MBps) [2024-11-17T00:57:51.282Z] Copying: 483/1024 [MB] (23 MBps) [2024-11-17T00:57:52.666Z] Copying: 513/1024 [MB] (29 MBps) [2024-11-17T00:57:53.609Z] Copying: 537/1024 [MB] (24 MBps) [2024-11-17T00:57:54.550Z] Copying: 553/1024 [MB] (16 MBps) [2024-11-17T00:57:55.495Z] Copying: 579/1024 [MB] (25 MBps) [2024-11-17T00:57:56.440Z] Copying: 604/1024 [MB] (25 MBps) [2024-11-17T00:57:57.385Z] Copying: 620/1024 [MB] (15 MBps) [2024-11-17T00:57:58.329Z] Copying: 639/1024 [MB] (19 MBps) [2024-11-17T00:57:59.271Z] Copying: 662/1024 [MB] (22 MBps) [2024-11-17T00:58:00.653Z] Copying: 680/1024 [MB] (17 MBps) [2024-11-17T00:58:01.598Z] Copying: 705/1024 [MB] (25 MBps) [2024-11-17T00:58:02.543Z] Copying: 729/1024 [MB] (23 MBps) [2024-11-17T00:58:03.484Z] Copying: 760/1024 [MB] (30 MBps) [2024-11-17T00:58:04.422Z] Copying: 793/1024 [MB] (33 MBps) [2024-11-17T00:58:05.366Z] Copying: 827/1024 [MB] (33 MBps) [2024-11-17T00:58:06.309Z] Copying: 854/1024 [MB] (27 MBps) [2024-11-17T00:58:07.250Z] Copying: 883/1024 [MB] (28 MBps) 
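
The spdk_dd progress meter interleaved here prints cumulative samples like `Copying: 883/1024 [MB] (28 MBps)`; the closing `average 22 MBps` sample appears at the end of the stream just below. From the first and last sample timestamps the straight-line average can be recomputed; a minimal sketch, with the values copied from the meter and the helper being illustrative only:

```python
from datetime import datetime

def mbps(t0, t1, megabytes):
    """Average throughput between two ISO-8601 samples from the meter."""
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    seconds = (datetime.strptime(t1, fmt) - datetime.strptime(t0, fmt)).total_seconds()
    return megabytes / seconds

# First sample to last sample: ~43.2 s for 1024 MiB -> ~23.7 MBps.
print(mbps("2024-11-17T00:57:29.546Z", "2024-11-17T00:58:12.738Z", 1024))
# If the meter's denominator starts a couple of seconds earlier (process start
# rather than first sample), the figure drops to roughly the reported 22 MBps.
```
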
[2024-11-17T00:58:08.636Z] Copying: 909/1024 [MB] (25 MBps) [2024-11-17T00:58:09.645Z] Copying: 936/1024 [MB] (27 MBps) [2024-11-17T00:58:10.589Z] Copying: 961/1024 [MB] (24 MBps) [2024-11-17T00:58:11.531Z] Copying: 987/1024 [MB] (26 MBps) [2024-11-17T00:58:12.475Z] Copying: 1002/1024 [MB] (15 MBps) [2024-11-17T00:58:12.738Z] Copying: 1019/1024 [MB] (16 MBps) [2024-11-17T00:58:12.738Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-17 00:58:12.728640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.675 [2024-11-17 00:58:12.729030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:20.675 [2024-11-17 00:58:12.729141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:20.675 [2024-11-17 00:58:12.729178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.675 [2024-11-17 00:58:12.729243] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:20.675 [2024-11-17 00:58:12.730230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.675 [2024-11-17 00:58:12.730427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:20.675 [2024-11-17 00:58:12.730536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:26:20.675 [2024-11-17 00:58:12.730581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.675 [2024-11-17 00:58:12.730934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.675 [2024-11-17 00:58:12.730971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:20.675 [2024-11-17 00:58:12.731001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:26:20.675 [2024-11-17 00:58:12.731085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.745519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.937 [2024-11-17 00:58:12.745704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:20.937 [2024-11-17 00:58:12.745899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.387 ms 00:26:20.937 [2024-11-17 00:58:12.745952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.752230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.937 [2024-11-17 00:58:12.752388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:20.937 [2024-11-17 00:58:12.752572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.217 ms 00:26:20.937 [2024-11-17 00:58:12.752612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.755281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.937 [2024-11-17 00:58:12.755457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:20.937 [2024-11-17 00:58:12.755677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:26:20.937 [2024-11-17 00:58:12.755717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.761190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.937 [2024-11-17 00:58:12.761342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:20.937 [2024-11-17 00:58:12.761412] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.414 ms 00:26:20.937 [2024-11-17 00:58:12.761444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.766175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.937 [2024-11-17 00:58:12.766316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:20.937 [2024-11-17 00:58:12.766390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:26:20.937 [2024-11-17 00:58:12.766416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.769587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.937 [2024-11-17 00:58:12.769729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:20.937 [2024-11-17 00:58:12.769781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:26:20.937 [2024-11-17 00:58:12.769803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.772635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.937 [2024-11-17 00:58:12.772803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:20.937 [2024-11-17 00:58:12.772874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:26:20.937 [2024-11-17 00:58:12.772897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.775214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.937 [2024-11-17 00:58:12.775370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:20.937 [2024-11-17 00:58:12.775423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:26:20.937 [2024-11-17 00:58:12.775444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.777695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.937 [2024-11-17 00:58:12.777836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:20.937 [2024-11-17 00:58:12.777888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:26:20.937 [2024-11-17 00:58:12.777910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.937 [2024-11-17 00:58:12.777995] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:20.937 [2024-11-17 00:58:12.778037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:20.937 [2024-11-17 00:58:12.778060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:20.937 [2024-11-17 00:58:12.778069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 
state: free 00:26:20.938 [2024-11-17 00:58:12.778107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 
/ 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:20.938 [2024-11-17 00:58:12.778714] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:20.939 [2024-11-17 00:58:12.778876] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:20.939 [2024-11-17 00:58:12.778885] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ce98c52-c64d-4fbe-a40a-a5fd6631e08d 00:26:20.939 [2024-11-17 00:58:12.778894] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:20.939 [2024-11-17 00:58:12.778911] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 164032 00:26:20.939 [2024-11-17 00:58:12.778919] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 162048 00:26:20.939 [2024-11-17 00:58:12.778928] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:26:20.939 [2024-11-17 00:58:12.778945] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:20.939 [2024-11-17 00:58:12.778953] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:20.939 [2024-11-17 00:58:12.778961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:20.939 [2024-11-17 00:58:12.778968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:20.939 [2024-11-17 00:58:12.778975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:20.939 [2024-11-17 00:58:12.778983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.939 [2024-11-17 00:58:12.778995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:20.939 [2024-11-17 00:58:12.779005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:26:20.939 [2024-11-17 00:58:12.779013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.781372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.939 [2024-11-17 00:58:12.781417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:20.939 [2024-11-17 00:58:12.781428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.320 ms 00:26:20.939 [2024-11-17 00:58:12.781436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.781559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.939 [2024-11-17 00:58:12.781568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:20.939 [2024-11-17 00:58:12.781577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:26:20.939 [2024-11-17 00:58:12.781589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.788170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.788225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:20.939 [2024-11-17 00:58:12.788246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.788254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.788310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.788319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:20.939 [2024-11-17 00:58:12.788327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.788339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.788439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.788451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:20.939 [2024-11-17 00:58:12.788460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.788468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.788485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.788493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:20.939 [2024-11-17 00:58:12.788500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.788508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:20.939 [2024-11-17 00:58:12.801656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.801708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:20.939 [2024-11-17 00:58:12.801720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.801729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.811811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.811860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:20.939 [2024-11-17 00:58:12.811871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.811879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.811933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.811943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:20.939 [2024-11-17 00:58:12.811952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.811960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.811993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.812003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:20.939 [2024-11-17 00:58:12.812011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.812025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.812092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.812105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:20.939 [2024-11-17 00:58:12.812114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.812121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.812150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.812160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:20.939 [2024-11-17 00:58:12.812168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.812176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.812214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.812227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:20.939 [2024-11-17 00:58:12.812235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 00:58:12.812247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.812291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:20.939 [2024-11-17 00:58:12.812301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:20.939 [2024-11-17 00:58:12.812310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:20.939 [2024-11-17 
00:58:12.812317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.939 [2024-11-17 00:58:12.812505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.836 ms, result 0 00:26:21.200 00:26:21.200 00:26:21.200 00:58:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:23.750 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:23.750 00:58:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:23.750 [2024-11-17 00:58:15.230144] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:23.750 [2024-11-17 00:58:15.230250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91788 ] 00:26:23.750 [2024-11-17 00:58:15.376518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.750 [2024-11-17 00:58:15.409779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.750 [2024-11-17 00:58:15.500569] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:23.750 [2024-11-17 00:58:15.500644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:23.750 [2024-11-17 00:58:15.657132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 [2024-11-17 00:58:15.657185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:23.750 [2024-11-17 00:58:15.657203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:23.750 [2024-11-17 00:58:15.657212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.657262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 [2024-11-17 00:58:15.657273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:23.750 [2024-11-17 00:58:15.657282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:23.750 [2024-11-17 00:58:15.657290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.657309] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:23.750 [2024-11-17 00:58:15.657860] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:23.750 [2024-11-17 00:58:15.657917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 [2024-11-17 00:58:15.657929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:23.750 [2024-11-17 00:58:15.657941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:26:23.750 [2024-11-17 00:58:15.657952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.659462] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:23.750 [2024-11-17 00:58:15.662961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 
[2024-11-17 00:58:15.663007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:23.750 [2024-11-17 00:58:15.663019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:26:23.750 [2024-11-17 00:58:15.663027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.663103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 [2024-11-17 00:58:15.663116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:23.750 [2024-11-17 00:58:15.663125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:23.750 [2024-11-17 00:58:15.663133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.670506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 [2024-11-17 00:58:15.670556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:23.750 [2024-11-17 00:58:15.670570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.326 ms 00:26:23.750 [2024-11-17 00:58:15.670581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.670679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 [2024-11-17 00:58:15.670690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:23.750 [2024-11-17 00:58:15.670698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:23.750 [2024-11-17 00:58:15.670709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.670752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 [2024-11-17 00:58:15.670762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:23.750 [2024-11-17 00:58:15.670771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:23.750 [2024-11-17 00:58:15.670783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.670812] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:23.750 [2024-11-17 00:58:15.672649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 [2024-11-17 00:58:15.672686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:23.750 [2024-11-17 00:58:15.672696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.847 ms 00:26:23.750 [2024-11-17 00:58:15.672704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.672736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.750 [2024-11-17 00:58:15.672744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:23.750 [2024-11-17 00:58:15.672753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:23.750 [2024-11-17 00:58:15.672768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.750 [2024-11-17 00:58:15.672805] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:23.750 [2024-11-17 00:58:15.672828] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:23.750 [2024-11-17 00:58:15.672882] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:23.750 [2024-11-17 00:58:15.672898] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:23.750 [2024-11-17 00:58:15.673004] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:23.750 [2024-11-17 00:58:15.673016] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:23.750 [2024-11-17 00:58:15.673027] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:23.751 [2024-11-17 00:58:15.673042] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673054] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673063] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:23.751 [2024-11-17 00:58:15.673071] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:23.751 [2024-11-17 00:58:15.673078] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:23.751 [2024-11-17 00:58:15.673086] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:23.751 [2024-11-17 00:58:15.673094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.751 [2024-11-17 00:58:15.673102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:23.751 [2024-11-17 00:58:15.673114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:26:23.751 [2024-11-17 00:58:15.673124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.751 [2024-11-17 00:58:15.673207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.751 [2024-11-17 00:58:15.673219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:23.751 [2024-11-17 00:58:15.673232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:23.751 [2024-11-17 00:58:15.673244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.751 [2024-11-17 00:58:15.673375] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:23.751 [2024-11-17 00:58:15.673393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:23.751 [2024-11-17 00:58:15.673403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:23.751 [2024-11-17 00:58:15.673448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:23.751 [2024-11-17 00:58:15.673479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.751 [2024-11-17 00:58:15.673505] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:23.751 [2024-11-17 00:58:15.673518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:23.751 [2024-11-17 00:58:15.673526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.751 [2024-11-17 00:58:15.673537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:23.751 [2024-11-17 00:58:15.673549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:23.751 [2024-11-17 00:58:15.673564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:23.751 [2024-11-17 00:58:15.673580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:23.751 [2024-11-17 00:58:15.673605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:23.751 [2024-11-17 00:58:15.673628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:23.751 [2024-11-17 00:58:15.673659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:23.751 [2024-11-17 00:58:15.673683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:23.751 [2024-11-17 00:58:15.673706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.751 [2024-11-17 00:58:15.673722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:23.751 [2024-11-17 00:58:15.673730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:23.751 [2024-11-17 00:58:15.673737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.751 [2024-11-17 00:58:15.673745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:23.751 [2024-11-17 00:58:15.673753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:23.751 [2024-11-17 00:58:15.673760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:23.751 [2024-11-17 00:58:15.673775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:23.751 
[2024-11-17 00:58:15.673786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673793] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:23.751 [2024-11-17 00:58:15.673803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:23.751 [2024-11-17 00:58:15.673811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.751 [2024-11-17 00:58:15.673832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:23.751 [2024-11-17 00:58:15.673841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:23.751 [2024-11-17 00:58:15.673848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:23.751 [2024-11-17 00:58:15.673856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:23.751 [2024-11-17 00:58:15.673863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:23.751 [2024-11-17 00:58:15.673870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:23.751 [2024-11-17 00:58:15.673879] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:23.751 [2024-11-17 00:58:15.673888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.751 [2024-11-17 00:58:15.673898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:23.751 [2024-11-17 00:58:15.673906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:23.751 [2024-11-17 00:58:15.673913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:23.751 [2024-11-17 00:58:15.673922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:23.751 [2024-11-17 00:58:15.673930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:23.751 [2024-11-17 00:58:15.673937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:23.751 [2024-11-17 00:58:15.673945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:23.751 [2024-11-17 00:58:15.673952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:23.751 [2024-11-17 00:58:15.673959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:23.751 [2024-11-17 00:58:15.673966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:23.751 [2024-11-17 00:58:15.673974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:23.751 [2024-11-17 00:58:15.673981] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:23.751 [2024-11-17 00:58:15.673988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:23.751 [2024-11-17 00:58:15.673996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:23.751 [2024-11-17 00:58:15.674003] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:23.751 [2024-11-17 00:58:15.674011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.751 [2024-11-17 00:58:15.674019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:23.751 [2024-11-17 00:58:15.674026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:23.751 [2024-11-17 00:58:15.674034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:23.751 [2024-11-17 00:58:15.674043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:23.751 [2024-11-17 00:58:15.674051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.751 [2024-11-17 00:58:15.674059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:23.751 [2024-11-17 00:58:15.674071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:26:23.751 [2024-11-17 00:58:15.674078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.751 [2024-11-17 00:58:15.695002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.751 [2024-11-17 00:58:15.695069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:23.751 [2024-11-17 00:58:15.695088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.870 ms 00:26:23.751 [2024-11-17 00:58:15.695099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.751 [2024-11-17 00:58:15.695226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.751 [2024-11-17 00:58:15.695240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:23.752 [2024-11-17 00:58:15.695252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:23.752 [2024-11-17 00:58:15.695262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.707104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.707157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:23.752 [2024-11-17 00:58:15.707167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.763 ms 00:26:23.752 [2024-11-17 00:58:15.707175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.707214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.707224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:23.752 
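This whole 'FTL startup' sequence is what SPDK emits when an FTL bdev is brought up over a base device and an NV cache device. The exact invocation here is buried in the test scripts, but a hand-run equivalent would look roughly like the sketch below; the bdev names are placeholders and the flag spelling assumes the current rpc.py bdev_ftl_create signature, so treat the details as assumptions rather than what this test ran:

  # Create an FTL bdev (kicks off the 'FTL startup' management process above).
  # base_n1 / cache_n1 are placeholder bdev names, not the ones this test used.
  ./scripts/rpc.py bdev_ftl_create -b ftl0 -d base_n1 -c cache_n1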
[2024-11-17 00:58:15.707232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:23.752 [2024-11-17 00:58:15.707247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.707768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.707812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:23.752 [2024-11-17 00:58:15.707822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:26:23.752 [2024-11-17 00:58:15.707832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.707976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.707986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:23.752 [2024-11-17 00:58:15.707996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:26:23.752 [2024-11-17 00:58:15.708004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.715013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.715066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:23.752 [2024-11-17 00:58:15.715081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.984 ms 00:26:23.752 [2024-11-17 00:58:15.715093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.718773] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:23.752 [2024-11-17 00:58:15.718823] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:23.752 [2024-11-17 00:58:15.718835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.718843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:23.752 [2024-11-17 00:58:15.718852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:26:23.752 [2024-11-17 00:58:15.718860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.734948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.735004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:23.752 [2024-11-17 00:58:15.735019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.029 ms 00:26:23.752 [2024-11-17 00:58:15.735028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.737826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.737872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:23.752 [2024-11-17 00:58:15.737882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.742 ms 00:26:23.752 [2024-11-17 00:58:15.737889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.740533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.740577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:23.752 [2024-11-17 00:58:15.740587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.598 ms 00:26:23.752 [2024-11-17 00:58:15.740593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.740966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.741026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:23.752 [2024-11-17 00:58:15.741037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:26:23.752 [2024-11-17 00:58:15.741045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.764119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.764188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:23.752 [2024-11-17 00:58:15.764201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.049 ms 00:26:23.752 [2024-11-17 00:58:15.764210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.772284] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:23.752 [2024-11-17 00:58:15.775653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.775693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:23.752 [2024-11-17 00:58:15.775713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.392 ms 00:26:23.752 [2024-11-17 00:58:15.775724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.775797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.775808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:23.752 [2024-11-17 00:58:15.775818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:23.752 [2024-11-17 00:58:15.775826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.776618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.776662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:23.752 [2024-11-17 00:58:15.776671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:26:23.752 [2024-11-17 00:58:15.776683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.776714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.776727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:23.752 [2024-11-17 00:58:15.776735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:23.752 [2024-11-17 00:58:15.776743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.776780] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:23.752 [2024-11-17 00:58:15.776790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.776798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:23.752 [2024-11-17 00:58:15.776806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:23.752 [2024-11-17 00:58:15.776817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.782077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.782124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:23.752 [2024-11-17 00:58:15.782135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.239 ms 00:26:23.752 [2024-11-17 00:58:15.782143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.782227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.752 [2024-11-17 00:58:15.782237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:23.752 [2024-11-17 00:58:15.782246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:23.752 [2024-11-17 00:58:15.782255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.752 [2024-11-17 00:58:15.783337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.764 ms, result 0 00:26:25.140  [2024-11-17T00:58:18.146Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-17T00:58:19.088Z] Copying: 39/1024 [MB] (16 MBps) [2024-11-17T00:58:20.034Z] Copying: 55/1024 [MB] (16 MBps) [2024-11-17T00:58:20.980Z] Copying: 70/1024 [MB] (14 MBps) [2024-11-17T00:58:22.369Z] Copying: 81/1024 [MB] (11 MBps) [2024-11-17T00:58:23.319Z] Copying: 92/1024 [MB] (10 MBps) [2024-11-17T00:58:24.264Z] Copying: 114/1024 [MB] (21 MBps) [2024-11-17T00:58:25.208Z] Copying: 125/1024 [MB] (10 MBps) [2024-11-17T00:58:26.153Z] Copying: 135/1024 [MB] (10 MBps) [2024-11-17T00:58:27.096Z] Copying: 149/1024 [MB] (13 MBps) [2024-11-17T00:58:28.037Z] Copying: 161/1024 [MB] (12 MBps) [2024-11-17T00:58:28.980Z] Copying: 175/1024 [MB] (13 MBps) [2024-11-17T00:58:30.365Z] Copying: 193/1024 [MB] (18 MBps) [2024-11-17T00:58:31.310Z] Copying: 212/1024 [MB] (19 MBps) [2024-11-17T00:58:32.254Z] Copying: 231/1024 [MB] (18 MBps) [2024-11-17T00:58:33.198Z] Copying: 252/1024 [MB] (20 MBps) [2024-11-17T00:58:34.141Z] Copying: 269/1024 [MB] (16 MBps) [2024-11-17T00:58:35.082Z] Copying: 293/1024 [MB] (23 MBps) [2024-11-17T00:58:36.028Z] Copying: 311/1024 [MB] (17 MBps) [2024-11-17T00:58:36.970Z] Copying: 330/1024 [MB] (19 MBps) [2024-11-17T00:58:38.355Z] Copying: 368/1024 [MB] (37 MBps) [2024-11-17T00:58:39.296Z] Copying: 396/1024 [MB] (27 MBps) [2024-11-17T00:58:40.240Z] Copying: 419/1024 [MB] (23 MBps) [2024-11-17T00:58:41.258Z] Copying: 443/1024 [MB] (23 MBps) [2024-11-17T00:58:42.217Z] Copying: 464/1024 [MB] (20 MBps) [2024-11-17T00:58:43.160Z] Copying: 484/1024 [MB] (20 MBps) [2024-11-17T00:58:44.106Z] Copying: 504/1024 [MB] (20 MBps) [2024-11-17T00:58:45.049Z] Copying: 515/1024 [MB] (10 MBps) [2024-11-17T00:58:45.995Z] Copying: 525/1024 [MB] (10 MBps) [2024-11-17T00:58:47.380Z] Copying: 538/1024 [MB] (12 MBps) [2024-11-17T00:58:48.324Z] Copying: 551/1024 [MB] (13 MBps) [2024-11-17T00:58:49.268Z] Copying: 565/1024 [MB] (13 MBps) [2024-11-17T00:58:50.208Z] Copying: 584/1024 [MB] (18 MBps) [2024-11-17T00:58:51.151Z] Copying: 606/1024 [MB] (22 MBps) [2024-11-17T00:58:52.096Z] Copying: 623/1024 [MB] (17 MBps) [2024-11-17T00:58:53.042Z] Copying: 642/1024 [MB] (18 MBps) [2024-11-17T00:58:53.988Z] Copying: 661/1024 [MB] (19 MBps) [2024-11-17T00:58:55.377Z] Copying: 672/1024 [MB] (10 MBps) [2024-11-17T00:58:56.323Z] Copying: 682/1024 [MB] (10 MBps) [2024-11-17T00:58:57.270Z] Copying: 693/1024 [MB] (10 MBps) [2024-11-17T00:58:58.214Z] Copying: 
704/1024 [MB] (10 MBps) [2024-11-17T00:58:59.156Z] Copying: 714/1024 [MB] (10 MBps) [2024-11-17T00:59:00.099Z] Copying: 736/1024 [MB] (21 MBps) [2024-11-17T00:59:01.042Z] Copying: 754/1024 [MB] (17 MBps) [2024-11-17T00:59:01.987Z] Copying: 777/1024 [MB] (22 MBps) [2024-11-17T00:59:03.374Z] Copying: 796/1024 [MB] (19 MBps) [2024-11-17T00:59:04.317Z] Copying: 814/1024 [MB] (18 MBps) [2024-11-17T00:59:05.255Z] Copying: 835/1024 [MB] (20 MBps) [2024-11-17T00:59:06.201Z] Copying: 862/1024 [MB] (27 MBps) [2024-11-17T00:59:07.147Z] Copying: 890/1024 [MB] (27 MBps) [2024-11-17T00:59:08.093Z] Copying: 901/1024 [MB] (10 MBps) [2024-11-17T00:59:09.040Z] Copying: 913/1024 [MB] (12 MBps) [2024-11-17T00:59:09.983Z] Copying: 927/1024 [MB] (14 MBps) [2024-11-17T00:59:11.372Z] Copying: 944/1024 [MB] (16 MBps) [2024-11-17T00:59:12.318Z] Copying: 957/1024 [MB] (13 MBps) [2024-11-17T00:59:13.316Z] Copying: 969/1024 [MB] (11 MBps) [2024-11-17T00:59:14.260Z] Copying: 982/1024 [MB] (13 MBps) [2024-11-17T00:59:15.209Z] Copying: 993/1024 [MB] (10 MBps) [2024-11-17T00:59:16.154Z] Copying: 1003/1024 [MB] (10 MBps) [2024-11-17T00:59:17.099Z] Copying: 1014/1024 [MB] (10 MBps) [2024-11-17T00:59:17.362Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 00:59:17.139792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.139884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:25.299 [2024-11-17 00:59:17.139902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:25.299 [2024-11-17 00:59:17.139912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.139942] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:25.299 [2024-11-17 00:59:17.140736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.140775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:25.299 [2024-11-17 00:59:17.140788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:27:25.299 [2024-11-17 00:59:17.140797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.141062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.141074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:25.299 [2024-11-17 00:59:17.141085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:27:25.299 [2024-11-17 00:59:17.141095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.144585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.144614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:25.299 [2024-11-17 00:59:17.144623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.473 ms 00:27:25.299 [2024-11-17 00:59:17.144631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.150861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.150907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:25.299 [2024-11-17 00:59:17.150919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.213 ms 00:27:25.299 [2024-11-17 00:59:17.150927] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.153939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.153995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:25.299 [2024-11-17 00:59:17.154007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms 00:27:25.299 [2024-11-17 00:59:17.154015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.160013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.160065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:25.299 [2024-11-17 00:59:17.160076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.953 ms 00:27:25.299 [2024-11-17 00:59:17.160086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.164168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.164214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.299 [2024-11-17 00:59:17.164225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.035 ms 00:27:25.299 [2024-11-17 00:59:17.164233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.167689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.167741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:25.299 [2024-11-17 00:59:17.167751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.437 ms 00:27:25.299 [2024-11-17 00:59:17.167759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.170415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.170459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:25.299 [2024-11-17 00:59:17.170468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:27:25.299 [2024-11-17 00:59:17.170475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.172644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.172688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:25.299 [2024-11-17 00:59:17.172698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:27:25.299 [2024-11-17 00:59:17.172706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.174666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.299 [2024-11-17 00:59:17.174709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.299 [2024-11-17 00:59:17.174718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.891 ms 00:27:25.299 [2024-11-17 00:59:17.174726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.299 [2024-11-17 00:59:17.174764] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.299 [2024-11-17 00:59:17.174793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:25.299 [2024-11-17 00:59:17.174806] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:25.299 [2024-11-17 00:59:17.174815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.174996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 
00:59:17.175004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.299 [2024-11-17 00:59:17.175082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:27:25.300 [2024-11-17 00:59:17.175196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.300 [2024-11-17 00:59:17.175620] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:27:25.300 [2024-11-17 00:59:17.175634] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ce98c52-c64d-4fbe-a40a-a5fd6631e08d 00:27:25.300 [2024-11-17 00:59:17.175643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:25.300 [2024-11-17 00:59:17.175651] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:25.300 [2024-11-17 00:59:17.175659] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:25.300 [2024-11-17 00:59:17.175668] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:25.300 [2024-11-17 00:59:17.175676] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.300 [2024-11-17 00:59:17.175685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.300 [2024-11-17 00:59:17.175692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:25.300 [2024-11-17 00:59:17.175699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.300 [2024-11-17 00:59:17.175706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.300 [2024-11-17 00:59:17.175714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.300 [2024-11-17 00:59:17.175722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.300 [2024-11-17 00:59:17.175749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:27:25.300 [2024-11-17 00:59:17.175758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.300 [2024-11-17 00:59:17.178038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.300 [2024-11-17 00:59:17.178075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:25.300 [2024-11-17 00:59:17.178086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.252 ms 00:27:25.300 [2024-11-17 00:59:17.178105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.300 [2024-11-17 00:59:17.178234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.300 [2024-11-17 00:59:17.178245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.300 [2024-11-17 00:59:17.178253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:27:25.300 [2024-11-17 00:59:17.178261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.300 [2024-11-17 00:59:17.184813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.300 [2024-11-17 00:59:17.184861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.300 [2024-11-17 00:59:17.184884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.300 [2024-11-17 00:59:17.184893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.300 [2024-11-17 00:59:17.184952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.300 [2024-11-17 00:59:17.184961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.300 [2024-11-17 00:59:17.184970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.300 [2024-11-17 00:59:17.184979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.300 [2024-11-17 00:59:17.185044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
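The 'WAF: inf' in the statistics dump above is expected for this run: write amplification factor is total media writes divided by user writes, and with 960 total writes against 0 user writes the quotient is undefined, which SPDK prints as infinity. The same computation with a zero guard, as a one-line awk sketch:

  # WAF = total writes / user writes; "inf" when no user writes were recorded.
  awk -v t=960 -v u=0 'BEGIN { waf = (u > 0 ? t / u : "inf"); print waf }'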
00:27:25.301 [2024-11-17 00:59:17.185056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.301 [2024-11-17 00:59:17.185064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.185071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 00:59:17.185086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.301 [2024-11-17 00:59:17.185097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.301 [2024-11-17 00:59:17.185105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.185113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 00:59:17.198547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.301 [2024-11-17 00:59:17.198601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.301 [2024-11-17 00:59:17.198611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.198620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 00:59:17.209655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.301 [2024-11-17 00:59:17.209711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.301 [2024-11-17 00:59:17.209723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.209733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 00:59:17.209796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.301 [2024-11-17 00:59:17.209811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.301 [2024-11-17 00:59:17.209819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.209828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 00:59:17.209865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.301 [2024-11-17 00:59:17.209875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.301 [2024-11-17 00:59:17.209887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.209896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 00:59:17.209964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.301 [2024-11-17 00:59:17.209974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.301 [2024-11-17 00:59:17.209982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.209990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 00:59:17.210020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.301 [2024-11-17 00:59:17.210030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.301 [2024-11-17 00:59:17.210038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.210049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 
00:59:17.210089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.301 [2024-11-17 00:59:17.210100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.301 [2024-11-17 00:59:17.210108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.210117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 00:59:17.210166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.301 [2024-11-17 00:59:17.210187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.301 [2024-11-17 00:59:17.210199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.301 [2024-11-17 00:59:17.210208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.301 [2024-11-17 00:59:17.210349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.523 ms, result 0 00:27:25.562 00:27:25.562 00:27:25.562 00:59:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:28.109 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:28.109 Process with pid 89693 is not found 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89693 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89693 ']' 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89693 00:27:28.109 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89693) - No such process 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89693 is not found' 00:27:28.109 00:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:28.370 Remove shared memory files 00:27:28.370 00:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:28.370 00:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:28.370 00:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:28.370 00:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:28.370 00:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:28.370 00:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:28.370 00:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:28.370 00:27:28.370 real 4m23.131s 00:27:28.370 user 5m0.461s 
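The 'testfile2: OK' line above is the actual pass criterion of ftl_dirty_shutdown: data hashed before the unclean stop must hash identically once the FTL bdev has been restored. In outline (paths shortened here; the real ones are the spdk_repo paths in the log):

  md5sum testfile2 > testfile2.md5   # checksum recorded before the dirty shutdown
  # ... unclean shutdown, then the 'FTL startup' restore sequence ...
  md5sum -c testfile2.md5            # must report "testfile2: OK"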
00:27:28.370 sys 0m30.002s 00:27:28.370 00:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:28.370 00:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:28.370 ************************************ 00:27:28.370 END TEST ftl_dirty_shutdown 00:27:28.370 ************************************ 00:27:28.370 00:59:20 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:28.370 00:59:20 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:28.370 00:59:20 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:28.370 00:59:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:28.370 ************************************ 00:27:28.370 START TEST ftl_upgrade_shutdown 00:27:28.370 ************************************ 00:27:28.370 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:28.370 * Looking for test storage... 00:27:28.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.370 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:28.370 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:28.370 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.632 --rc genhtml_branch_coverage=1 00:27:28.632 --rc genhtml_function_coverage=1 00:27:28.632 --rc genhtml_legend=1 00:27:28.632 --rc geninfo_all_blocks=1 00:27:28.632 --rc geninfo_unexecuted_blocks=1 00:27:28.632 00:27:28.632 ' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.632 --rc genhtml_branch_coverage=1 00:27:28.632 --rc genhtml_function_coverage=1 00:27:28.632 --rc genhtml_legend=1 00:27:28.632 --rc geninfo_all_blocks=1 00:27:28.632 --rc geninfo_unexecuted_blocks=1 00:27:28.632 00:27:28.632 ' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.632 --rc genhtml_branch_coverage=1 00:27:28.632 --rc genhtml_function_coverage=1 00:27:28.632 --rc genhtml_legend=1 00:27:28.632 --rc geninfo_all_blocks=1 00:27:28.632 --rc geninfo_unexecuted_blocks=1 00:27:28.632 00:27:28.632 ' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.632 --rc genhtml_branch_coverage=1 00:27:28.632 --rc genhtml_function_coverage=1 00:27:28.632 --rc genhtml_legend=1 00:27:28.632 --rc geninfo_all_blocks=1 00:27:28.632 --rc geninfo_unexecuted_blocks=1 00:27:28.632 00:27:28.632 ' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:28.632 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:28.633 00:59:20 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92520 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92520 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92520 ']' 00:27:28.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:28.633 00:59:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:28.633 [2024-11-17 00:59:20.587686] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
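
The target bring-up above reduces to starting spdk_tgt pinned to core 0 and polling its RPC socket until it answers. A minimal sketch with the paths from this run; waitforlisten in autotest_common.sh does the same job with extra pid-liveness checks, and the rpc_get_methods probe here is just one cheap way to test readiness, not the helper's literal implementation:

    # launch the SPDK target on core 0 (RPC socket defaults to /var/tmp/spdk.sock)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!
    # poll until the app serves RPCs; -t 1 keeps each probe short
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
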
00:27:28.633 [2024-11-17 00:59:20.587854] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92520 ] 00:27:28.894 [2024-11-17 00:59:20.742514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.894 [2024-11-17 00:59:20.792666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:29.467 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:29.729 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:29.729 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:29.729 00:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:29.729 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:29.729 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:29.729 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:29.729 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:29.729 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:29.990 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:29.990 { 00:27:29.990 "name": "basen1", 00:27:29.990 "aliases": [ 00:27:29.990 "a3d8bcff-54e1-4684-98e8-c29238c9c33c" 00:27:29.990 ], 00:27:29.990 "product_name": "NVMe disk", 00:27:29.990 "block_size": 4096, 00:27:29.990 "num_blocks": 1310720, 00:27:29.990 "uuid": "a3d8bcff-54e1-4684-98e8-c29238c9c33c", 00:27:29.990 "numa_id": -1, 00:27:29.990 "assigned_rate_limits": { 00:27:29.990 "rw_ios_per_sec": 0, 00:27:29.990 "rw_mbytes_per_sec": 0, 00:27:29.990 "r_mbytes_per_sec": 0, 00:27:29.990 "w_mbytes_per_sec": 0 00:27:29.990 }, 00:27:29.990 "claimed": true, 00:27:29.991 "claim_type": "read_many_write_one", 00:27:29.991 "zoned": false, 00:27:29.991 "supported_io_types": { 00:27:29.991 "read": true, 00:27:29.991 "write": true, 00:27:29.991 "unmap": true, 00:27:29.991 "flush": true, 00:27:29.991 "reset": true, 00:27:29.991 "nvme_admin": true, 00:27:29.991 "nvme_io": true, 00:27:29.991 "nvme_io_md": false, 00:27:29.991 "write_zeroes": true, 00:27:29.991 "zcopy": false, 00:27:29.991 "get_zone_info": false, 00:27:29.991 "zone_management": false, 00:27:29.991 "zone_append": false, 00:27:29.991 "compare": true, 00:27:29.991 "compare_and_write": false, 00:27:29.991 "abort": true, 00:27:29.991 "seek_hole": false, 00:27:29.991 "seek_data": false, 00:27:29.991 "copy": true, 00:27:29.991 "nvme_iov_md": false 00:27:29.991 }, 00:27:29.991 "driver_specific": { 00:27:29.991 "nvme": [ 00:27:29.991 { 00:27:29.991 "pci_address": "0000:00:11.0", 00:27:29.991 "trid": { 00:27:29.991 "trtype": "PCIe", 00:27:29.991 "traddr": "0000:00:11.0" 00:27:29.991 }, 00:27:29.991 "ctrlr_data": { 00:27:29.991 "cntlid": 0, 00:27:29.991 "vendor_id": "0x1b36", 00:27:29.991 "model_number": "QEMU NVMe Ctrl", 00:27:29.991 "serial_number": "12341", 00:27:29.991 "firmware_revision": "8.0.0", 00:27:29.991 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:29.991 "oacs": { 00:27:29.991 "security": 0, 00:27:29.991 "format": 1, 00:27:29.991 "firmware": 0, 00:27:29.991 "ns_manage": 1 00:27:29.991 }, 00:27:29.991 "multi_ctrlr": false, 00:27:29.991 "ana_reporting": false 00:27:29.991 }, 00:27:29.991 "vs": { 00:27:29.991 "nvme_version": "1.4" 00:27:29.991 }, 00:27:29.991 "ns_data": { 00:27:29.991 "id": 1, 00:27:29.991 "can_share": false 00:27:29.991 } 00:27:29.991 } 00:27:29.991 ], 00:27:29.991 "mp_policy": "active_passive" 00:27:29.991 } 00:27:29.991 } 00:27:29.991 ]' 00:27:29.991 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:29.991 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:29.991 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:29.991 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:29.991 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:29.991 00:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:29.991 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:29.991 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:29.991 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:29.991 00:59:22 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:29.991 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:30.252 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=64aad473-7af9-4910-9850-8a822aee56a5 00:27:30.252 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:30.252 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64aad473-7af9-4910-9850-8a822aee56a5 00:27:30.513 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:30.774 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=ecae4484-7441-4719-83ff-3795ecfb10e9 00:27:30.774 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ecae4484-7441-4719-83ff-3795ecfb10e9 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3 ]] 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3 5120 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:31.035 00:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:31.296 { 00:27:31.296 "name": "fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3", 00:27:31.296 "aliases": [ 00:27:31.296 "lvs/basen1p0" 00:27:31.296 ], 00:27:31.296 "product_name": "Logical Volume", 00:27:31.296 "block_size": 4096, 00:27:31.296 "num_blocks": 5242880, 00:27:31.296 "uuid": "fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3", 00:27:31.296 "assigned_rate_limits": { 00:27:31.296 "rw_ios_per_sec": 0, 00:27:31.296 "rw_mbytes_per_sec": 0, 00:27:31.296 "r_mbytes_per_sec": 0, 00:27:31.296 "w_mbytes_per_sec": 0 00:27:31.296 }, 00:27:31.296 "claimed": false, 00:27:31.296 "zoned": false, 00:27:31.296 "supported_io_types": { 00:27:31.296 "read": true, 00:27:31.296 "write": true, 00:27:31.296 "unmap": true, 00:27:31.296 "flush": false, 00:27:31.296 "reset": true, 00:27:31.296 "nvme_admin": false, 00:27:31.296 "nvme_io": false, 00:27:31.296 "nvme_io_md": false, 00:27:31.296 "write_zeroes": 
true, 00:27:31.296 "zcopy": false, 00:27:31.296 "get_zone_info": false, 00:27:31.296 "zone_management": false, 00:27:31.296 "zone_append": false, 00:27:31.296 "compare": false, 00:27:31.296 "compare_and_write": false, 00:27:31.296 "abort": false, 00:27:31.296 "seek_hole": true, 00:27:31.296 "seek_data": true, 00:27:31.296 "copy": false, 00:27:31.296 "nvme_iov_md": false 00:27:31.296 }, 00:27:31.296 "driver_specific": { 00:27:31.296 "lvol": { 00:27:31.296 "lvol_store_uuid": "ecae4484-7441-4719-83ff-3795ecfb10e9", 00:27:31.296 "base_bdev": "basen1", 00:27:31.296 "thin_provision": true, 00:27:31.296 "num_allocated_clusters": 0, 00:27:31.296 "snapshot": false, 00:27:31.296 "clone": false, 00:27:31.296 "esnap_clone": false 00:27:31.296 } 00:27:31.296 } 00:27:31.296 } 00:27:31.296 ]' 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:31.296 00:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:31.557 00:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:31.557 00:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:31.557 00:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:31.820 00:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:31.820 00:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:31.820 00:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3 -c cachen1p0 --l2p_dram_limit 2 00:27:31.820 [2024-11-17 00:59:23.845927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.845992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:31.820 [2024-11-17 00:59:23.846007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:31.820 [2024-11-17 00:59:23.846019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.846078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.846091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:31.820 [2024-11-17 00:59:23.846100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:31.820 [2024-11-17 00:59:23.846118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.846142] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:31.820 [2024-11-17 
00:59:23.846443] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:31.820 [2024-11-17 00:59:23.846463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.846476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:31.820 [2024-11-17 00:59:23.846488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:27:31.820 [2024-11-17 00:59:23.846500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.846535] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 8fc3b2c6-088e-4fda-b760-860cf2b58f6c 00:27:31.820 [2024-11-17 00:59:23.848309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.848341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:31.820 [2024-11-17 00:59:23.848376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:31.820 [2024-11-17 00:59:23.848386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.857575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.857623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:31.820 [2024-11-17 00:59:23.857636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.085 ms 00:27:31.820 [2024-11-17 00:59:23.857650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.857704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.857714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:31.820 [2024-11-17 00:59:23.857728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:31.820 [2024-11-17 00:59:23.857737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.857795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.857806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:31.820 [2024-11-17 00:59:23.857818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:31.820 [2024-11-17 00:59:23.857826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.857857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:31.820 [2024-11-17 00:59:23.860182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.860226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:31.820 [2024-11-17 00:59:23.860239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.334 ms 00:27:31.820 [2024-11-17 00:59:23.860251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.860289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.860306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:31.820 [2024-11-17 00:59:23.860316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:31.820 [2024-11-17 00:59:23.860330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.860377] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:31.820 [2024-11-17 00:59:23.860536] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:31.820 [2024-11-17 00:59:23.860552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:31.820 [2024-11-17 00:59:23.860569] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:31.820 [2024-11-17 00:59:23.860582] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:31.820 [2024-11-17 00:59:23.860605] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:31.820 [2024-11-17 00:59:23.860620] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:31.820 [2024-11-17 00:59:23.860631] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:31.820 [2024-11-17 00:59:23.860639] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:31.820 [2024-11-17 00:59:23.860653] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:31.820 [2024-11-17 00:59:23.860661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.860673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:31.820 [2024-11-17 00:59:23.860684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:27:31.820 [2024-11-17 00:59:23.860697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.820 [2024-11-17 00:59:23.860783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.820 [2024-11-17 00:59:23.860799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:31.820 [2024-11-17 00:59:23.860807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:31.821 [2024-11-17 00:59:23.860817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.821 [2024-11-17 00:59:23.860946] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:31.821 [2024-11-17 00:59:23.860960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:31.821 [2024-11-17 00:59:23.860969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:31.821 [2024-11-17 00:59:23.860982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.860991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:31.821 [2024-11-17 00:59:23.861000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:31.821 [2024-11-17 00:59:23.861025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:31.821 [2024-11-17 00:59:23.861034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:31.821 [2024-11-17 00:59:23.861043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:31.821 [2024-11-17 00:59:23.861063] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:31.821 [2024-11-17 00:59:23.861072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:31.821 [2024-11-17 00:59:23.861093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:31.821 [2024-11-17 00:59:23.861103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:31.821 [2024-11-17 00:59:23.861119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:31.821 [2024-11-17 00:59:23.861126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:31.821 [2024-11-17 00:59:23.861144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:31.821 [2024-11-17 00:59:23.861154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:31.821 [2024-11-17 00:59:23.861162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:31.821 [2024-11-17 00:59:23.861170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:31.821 [2024-11-17 00:59:23.861180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:31.821 [2024-11-17 00:59:23.861189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:31.821 [2024-11-17 00:59:23.861197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:31.821 [2024-11-17 00:59:23.861205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:31.821 [2024-11-17 00:59:23.861212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:31.821 [2024-11-17 00:59:23.861225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:31.821 [2024-11-17 00:59:23.861233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:31.821 [2024-11-17 00:59:23.861242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:31.821 [2024-11-17 00:59:23.861250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:31.821 [2024-11-17 00:59:23.861262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:31.821 [2024-11-17 00:59:23.861279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:31.821 [2024-11-17 00:59:23.861285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:31.821 [2024-11-17 00:59:23.861302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:31.821 [2024-11-17 00:59:23.861327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:31.821 [2024-11-17 00:59:23.861333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861342] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:31.821 [2024-11-17 00:59:23.861367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:31.821 [2024-11-17 00:59:23.861380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:31.821 [2024-11-17 00:59:23.861387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.821 [2024-11-17 00:59:23.861402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:31.821 [2024-11-17 00:59:23.861409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:31.821 [2024-11-17 00:59:23.861418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:31.821 [2024-11-17 00:59:23.861426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:31.821 [2024-11-17 00:59:23.861434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:31.821 [2024-11-17 00:59:23.861441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:31.821 [2024-11-17 00:59:23.861454] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:31.821 [2024-11-17 00:59:23.861465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:31.821 [2024-11-17 00:59:23.861491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:31.821 [2024-11-17 00:59:23.861519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:31.821 [2024-11-17 00:59:23.861527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:31.821 [2024-11-17 00:59:23.861538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:31.821 [2024-11-17 00:59:23.861546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:31.821 [2024-11-17 00:59:23.861608] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:31.821 [2024-11-17 00:59:23.861616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:31.821 [2024-11-17 00:59:23.861635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:31.821 [2024-11-17 00:59:23.861645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:31.821 [2024-11-17 00:59:23.861655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:31.821 [2024-11-17 00:59:23.861665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.821 [2024-11-17 00:59:23.861673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:31.821 [2024-11-17 00:59:23.861687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.812 ms 00:27:31.821 [2024-11-17 00:59:23.861694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.821 [2024-11-17 00:59:23.861760] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
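
The scrub notice above closes out device assembly: ftl now sits on a thin-provisioned lvol carved from the base controller and a 5 GiB split of the cache controller. Condensed from the RPC calls traced earlier, with the UUIDs this run reported:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0       # -> basen1, 20 GiB
    $rpc bdev_lvol_delete_lvstore -u 64aad473-7af9-4910-9850-8a822aee56a5  # clear stale lvstore from a prior run
    $rpc bdev_lvol_create_lvstore basen1 lvs                               # -> ecae4484-7441-4719-83ff-3795ecfb10e9
    $rpc bdev_lvol_create basen1p0 20480 -t -u ecae4484-7441-4719-83ff-3795ecfb10e9  # -> fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0      # -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                               # -> cachen1p0, the NV cache
    $rpc -t 60 bdev_ftl_create -b ftl -d fcace604-3ffb-4dc6-bf01-c4d1a59eb3c3 -c cachen1p0 --l2p_dram_limit 2
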
00:27:31.821 [2024-11-17 00:59:23.861773] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:36.025 [2024-11-17 00:59:27.321854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.321932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:36.025 [2024-11-17 00:59:27.321956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3460.072 ms 00:27:36.025 [2024-11-17 00:59:27.321965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.335303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.335371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:36.025 [2024-11-17 00:59:27.335388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.213 ms 00:27:36.025 [2024-11-17 00:59:27.335397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.335467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.335482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:36.025 [2024-11-17 00:59:27.335493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:36.025 [2024-11-17 00:59:27.335503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.347191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.347239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:36.025 [2024-11-17 00:59:27.347263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.642 ms 00:27:36.025 [2024-11-17 00:59:27.347273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.347312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.347321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:36.025 [2024-11-17 00:59:27.347333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:36.025 [2024-11-17 00:59:27.347341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.347890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.347929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:36.025 [2024-11-17 00:59:27.347944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.479 ms 00:27:36.025 [2024-11-17 00:59:27.347961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.348012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.348027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:36.025 [2024-11-17 00:59:27.348046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:36.025 [2024-11-17 00:59:27.348056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.372639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.372725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:36.025 [2024-11-17 00:59:27.372762] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.546 ms 00:27:36.025 [2024-11-17 00:59:27.372785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.382950] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:36.025 [2024-11-17 00:59:27.384208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.384254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:36.025 [2024-11-17 00:59:27.384266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.148 ms 00:27:36.025 [2024-11-17 00:59:27.384277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.402501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.402554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:36.025 [2024-11-17 00:59:27.402566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.194 ms 00:27:36.025 [2024-11-17 00:59:27.402586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.402689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.402703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:36.025 [2024-11-17 00:59:27.402712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:36.025 [2024-11-17 00:59:27.402724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.407606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.407655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:36.025 [2024-11-17 00:59:27.407673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.862 ms 00:27:36.025 [2024-11-17 00:59:27.407684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.412541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.412590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:36.025 [2024-11-17 00:59:27.412600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.805 ms 00:27:36.025 [2024-11-17 00:59:27.412611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.412969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.412985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:36.025 [2024-11-17 00:59:27.412996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.312 ms 00:27:36.025 [2024-11-17 00:59:27.413010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.452683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.452743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:36.025 [2024-11-17 00:59:27.452756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.649 ms 00:27:36.025 [2024-11-17 00:59:27.452771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.459541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:36.025 [2024-11-17 00:59:27.459593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:36.025 [2024-11-17 00:59:27.459606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.692 ms 00:27:36.025 [2024-11-17 00:59:27.459617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.465343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.465407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:36.025 [2024-11-17 00:59:27.465417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.678 ms 00:27:36.025 [2024-11-17 00:59:27.465428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.025 [2024-11-17 00:59:27.471497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.025 [2024-11-17 00:59:27.471546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:36.025 [2024-11-17 00:59:27.471557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.021 ms 00:27:36.026 [2024-11-17 00:59:27.471570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.026 [2024-11-17 00:59:27.471622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.026 [2024-11-17 00:59:27.471635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:36.026 [2024-11-17 00:59:27.471650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:36.026 [2024-11-17 00:59:27.471661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.026 [2024-11-17 00:59:27.471733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.026 [2024-11-17 00:59:27.471746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:36.026 [2024-11-17 00:59:27.471756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:36.026 [2024-11-17 00:59:27.471766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.026 [2024-11-17 00:59:27.472912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3626.520 ms, result 0 00:27:36.026 { 00:27:36.026 "name": "ftl", 00:27:36.026 "uuid": "8fc3b2c6-088e-4fda-b760-860cf2b58f6c" 00:27:36.026 } 00:27:36.026 00:59:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:36.026 [2024-11-17 00:59:27.690742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.026 00:59:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:36.026 00:59:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:36.287 [2024-11-17 00:59:28.127162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:36.287 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:36.287 [2024-11-17 00:59:28.335631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:36.546 00:59:28 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:36.805 Fill FTL, iteration 1 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92644 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92644 /var/tmp/spdk.tgt.sock 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92644 ']' 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:36.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:36.805 00:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:36.805 [2024-11-17 00:59:28.774077] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
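
With startup done, the ftl bdev is exported over NVMe-oF/TCP so a second SPDK app on core 1 can drive I/O against it. The export side, as traced above; note the save_config redirect target is an assumption (xtrace does not show redirections), based on common.sh keeping the target config in $spdk_tgt_cnfg for the later restart:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $rpc save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json   # assumed path ($spdk_tgt_cnfg)
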
00:27:36.805 [2024-11-17 00:59:28.774205] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92644 ] 00:27:37.063 [2024-11-17 00:59:28.922410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.063 [2024-11-17 00:59:28.964860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.630 00:59:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.630 00:59:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:37.630 00:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:37.888 ftln1 00:27:37.888 00:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:37.888 00:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92644 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92644 ']' 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92644 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92644 00:27:38.147 killing process with pid 92644 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92644' 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92644 00:27:38.147 00:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92644 00:27:38.405 00:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:38.405 00:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:38.664 [2024-11-17 00:59:30.499847] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
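
The initiator app just traced (pid 92644) lives only long enough to capture a bdev config that spdk_dd can replay: it attaches to the target over TCP, which surfaces namespace 1 as ftln1, dumps the bdev subsystem JSON, and is killed. A sketch of that capture; the ini.json output path is inferred from the --json argument spdk_dd is handed below:

    ini_rpc=/var/tmp/spdk.tgt.sock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $ini_rpc"
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0           # -> ftln1
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    kill "$spdk_ini_pid"                        # helper app no longer needed once the JSON exists
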
00:27:38.664 [2024-11-17 00:59:30.499959] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92677 ] 00:27:38.664 [2024-11-17 00:59:30.646806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.664 [2024-11-17 00:59:30.689479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.047  [2024-11-17T00:59:33.054Z] Copying: 177/1024 [MB] (177 MBps) [2024-11-17T00:59:33.997Z] Copying: 390/1024 [MB] (213 MBps) [2024-11-17T00:59:34.937Z] Copying: 633/1024 [MB] (243 MBps) [2024-11-17T00:59:35.880Z] Copying: 876/1024 [MB] (243 MBps) [2024-11-17T00:59:35.880Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:27:43.817 00:27:43.817 Calculate MD5 checksum, iteration 1 00:27:43.817 00:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:43.817 00:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:43.817 00:59:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:43.817 00:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:43.817 00:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:43.817 00:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:43.817 00:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:43.817 00:59:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:43.817 [2024-11-17 00:59:35.820259] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
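
Each fill pass is a single spdk_dd invocation, verbatim from the trace: 1024 blocks of 1 MiB read from /dev/urandom and written to ftln1 at queue depth 2, with --seek giving the output offset in blocks (0 for iteration 1, 1024 for iteration 2):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
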
00:27:43.817 [2024-11-17 00:59:35.820429] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92735 ] 00:27:44.078 [2024-11-17 00:59:35.979934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.078 [2024-11-17 00:59:36.020317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.463  [2024-11-17T00:59:38.097Z] Copying: 625/1024 [MB] (625 MBps) [2024-11-17T00:59:38.097Z] Copying: 1024/1024 [MB] (average 622 MBps) 00:27:46.034 00:27:46.034 00:59:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:46.034 00:59:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:48.576 Fill FTL, iteration 2 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6670e8afe43c615cee048dc7f7974aca 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:48.576 00:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:48.576 [2024-11-17 00:59:40.345293] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
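
The verify half mirrors the fill: the same 1 GiB window is read back from ftln1 into a scratch file over the identical TCP path and hashed, and the digest lands in sums[] for comparison after the shutdown/upgrade cycle (iteration 1 stored 6670e8afe43c615cee048dc7f7974aca above):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
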
00:27:48.576 [2024-11-17 00:59:40.345697] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92784 ] 00:27:48.576 [2024-11-17 00:59:40.495081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.576 [2024-11-17 00:59:40.555812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.034  [2024-11-17T00:59:43.041Z] Copying: 240/1024 [MB] (240 MBps) [2024-11-17T00:59:43.986Z] Copying: 466/1024 [MB] (226 MBps) [2024-11-17T00:59:44.930Z] Copying: 704/1024 [MB] (238 MBps) [2024-11-17T00:59:45.191Z] Copying: 938/1024 [MB] (234 MBps) [2024-11-17T00:59:45.454Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:27:53.391 00:27:53.391 Calculate MD5 checksum, iteration 2 00:27:53.391 00:59:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:53.391 00:59:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:53.391 00:59:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:53.391 00:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:53.391 00:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:53.391 00:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:53.391 00:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:53.391 00:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:53.391 [2024-11-17 00:59:45.418468] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
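
Both iterations follow the same seek/skip bookkeeping. Reconstructed from the variables traced above (iterations=2, seek and skip each advanced by the block count after use), the driver loop in upgrade_shutdown.sh looks roughly like this; a sketch under those assumptions, not the literal script:

    iterations=2
    seek=0
    skip=0
    sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        ((seek += 1024))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        ((skip += 1024))
        sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    done
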
00:27:53.391 [2024-11-17 00:59:45.418583] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92838 ] 00:27:53.652 [2024-11-17 00:59:45.566903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.652 [2024-11-17 00:59:45.626172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.039  [2024-11-17T00:59:47.676Z] Copying: 598/1024 [MB] (598 MBps) [2024-11-17T00:59:50.969Z] Copying: 1024/1024 [MB] (average 610 MBps) 00:27:58.906 00:27:58.906 00:59:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:58.906 00:59:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:01.436 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:01.436 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8820013dc5a45d8abe6f3b706ba7b72e 00:28:01.436 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:01.436 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:01.436 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:01.436 [2024-11-17 00:59:53.248746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.436 [2024-11-17 00:59:53.248783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:01.436 [2024-11-17 00:59:53.248794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:01.436 [2024-11-17 00:59:53.248801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.436 [2024-11-17 00:59:53.248818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.436 [2024-11-17 00:59:53.248827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:01.436 [2024-11-17 00:59:53.248834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:01.436 [2024-11-17 00:59:53.248840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.436 [2024-11-17 00:59:53.248855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.436 [2024-11-17 00:59:53.248862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:01.436 [2024-11-17 00:59:53.248868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:01.436 [2024-11-17 00:59:53.248873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.436 [2024-11-17 00:59:53.248942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.183 ms, result 0 00:28:01.436 true 00:28:01.436 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:01.436 { 00:28:01.436 "name": "ftl", 00:28:01.436 "properties": [ 00:28:01.436 { 00:28:01.436 "name": "superblock_version", 00:28:01.436 "value": 5, 00:28:01.436 "read-only": true 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "name": "base_device", 00:28:01.436 "bands": [ 00:28:01.436 { 00:28:01.436 "id": 0, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 
00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 1, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 2, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 3, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 4, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 5, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 6, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 7, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 8, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 9, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 10, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 11, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 12, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 13, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 14, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 15, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 16, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 17, 00:28:01.436 "state": "FREE", 00:28:01.436 "validity": 0.0 00:28:01.436 } 00:28:01.436 ], 00:28:01.436 "read-only": true 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "name": "cache_device", 00:28:01.436 "type": "bdev", 00:28:01.436 "chunks": [ 00:28:01.436 { 00:28:01.436 "id": 0, 00:28:01.436 "state": "INACTIVE", 00:28:01.436 "utilization": 0.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 1, 00:28:01.436 "state": "CLOSED", 00:28:01.436 "utilization": 1.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 2, 00:28:01.436 "state": "CLOSED", 00:28:01.436 "utilization": 1.0 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 3, 00:28:01.436 "state": "OPEN", 00:28:01.436 "utilization": 0.001953125 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "id": 4, 00:28:01.436 "state": "OPEN", 00:28:01.436 "utilization": 0.0 00:28:01.436 } 00:28:01.436 ], 00:28:01.436 "read-only": true 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "name": "verbose_mode", 00:28:01.436 "value": true, 00:28:01.436 "unit": "", 00:28:01.436 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:01.436 }, 00:28:01.436 { 00:28:01.436 "name": "prep_upgrade_on_shutdown", 00:28:01.436 "value": false, 00:28:01.436 "unit": "", 00:28:01.437 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:01.437 } 00:28:01.437 ] 00:28:01.437 } 00:28:01.437 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:01.695 [2024-11-17 00:59:53.657100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:01.695 [2024-11-17 00:59:53.657132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:01.695 [2024-11-17 00:59:53.657141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:01.695 [2024-11-17 00:59:53.657146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.695 [2024-11-17 00:59:53.657175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.695 [2024-11-17 00:59:53.657182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:01.695 [2024-11-17 00:59:53.657188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:01.695 [2024-11-17 00:59:53.657193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.695 [2024-11-17 00:59:53.657208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.695 [2024-11-17 00:59:53.657214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:01.695 [2024-11-17 00:59:53.657221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:01.695 [2024-11-17 00:59:53.657226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.695 [2024-11-17 00:59:53.657268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.158 ms, result 0 00:28:01.695 true 00:28:01.695 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:01.695 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:01.695 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:01.953 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:01.953 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:01.953 00:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:02.211 [2024-11-17 00:59:54.053451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.211 [2024-11-17 00:59:54.053476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:02.211 [2024-11-17 00:59:54.053483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:02.211 [2024-11-17 00:59:54.053489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.211 [2024-11-17 00:59:54.053504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.211 [2024-11-17 00:59:54.053510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:02.211 [2024-11-17 00:59:54.053515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:02.211 [2024-11-17 00:59:54.053521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.211 [2024-11-17 00:59:54.053535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.211 [2024-11-17 00:59:54.053541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:02.211 [2024-11-17 00:59:54.053546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:02.211 [2024-11-17 00:59:54.053551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:02.211 [2024-11-17 00:59:54.053590] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.129 ms, result 0 00:28:02.211 true 00:28:02.211 00:59:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:02.211 { 00:28:02.211 "name": "ftl", 00:28:02.211 "properties": [ 00:28:02.211 { 00:28:02.211 "name": "superblock_version", 00:28:02.211 "value": 5, 00:28:02.211 "read-only": true 00:28:02.211 }, 00:28:02.211 { 00:28:02.211 "name": "base_device", 00:28:02.211 "bands": [ 00:28:02.211 { 00:28:02.211 "id": 0, 00:28:02.211 "state": "FREE", 00:28:02.211 "validity": 0.0 00:28:02.211 }, 00:28:02.211 { 00:28:02.211 "id": 1, 00:28:02.211 "state": "FREE", 00:28:02.211 "validity": 0.0 00:28:02.211 }, 00:28:02.211 { 00:28:02.211 "id": 2, 00:28:02.211 "state": "FREE", 00:28:02.211 "validity": 0.0 00:28:02.211 }, 00:28:02.211 { 00:28:02.211 "id": 3, 00:28:02.211 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 4, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 5, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 6, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 7, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 8, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 9, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 10, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 11, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 12, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 13, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 14, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 15, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 16, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 17, 00:28:02.212 "state": "FREE", 00:28:02.212 "validity": 0.0 00:28:02.212 } 00:28:02.212 ], 00:28:02.212 "read-only": true 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "name": "cache_device", 00:28:02.212 "type": "bdev", 00:28:02.212 "chunks": [ 00:28:02.212 { 00:28:02.212 "id": 0, 00:28:02.212 "state": "INACTIVE", 00:28:02.212 "utilization": 0.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 1, 00:28:02.212 "state": "CLOSED", 00:28:02.212 "utilization": 1.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 2, 00:28:02.212 "state": "CLOSED", 00:28:02.212 "utilization": 1.0 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 3, 00:28:02.212 "state": "OPEN", 00:28:02.212 "utilization": 0.001953125 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "id": 4, 00:28:02.212 "state": "OPEN", 00:28:02.212 "utilization": 0.0 00:28:02.212 } 00:28:02.212 ], 00:28:02.212 "read-only": true 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "name": "verbose_mode", 
00:28:02.212 "value": true, 00:28:02.212 "unit": "", 00:28:02.212 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:02.212 }, 00:28:02.212 { 00:28:02.212 "name": "prep_upgrade_on_shutdown", 00:28:02.212 "value": true, 00:28:02.212 "unit": "", 00:28:02.212 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:02.212 } 00:28:02.212 ] 00:28:02.212 } 00:28:02.212 00:59:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:02.212 00:59:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92520 ]] 00:28:02.212 00:59:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92520 00:28:02.212 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92520 ']' 00:28:02.212 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92520 00:28:02.212 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:02.472 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.472 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92520 00:28:02.472 killing process with pid 92520 00:28:02.472 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:02.472 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:02.472 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92520' 00:28:02.472 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92520 00:28:02.472 00:59:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92520 00:28:02.472 [2024-11-17 00:59:54.375646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:02.472 [2024-11-17 00:59:54.378667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.472 [2024-11-17 00:59:54.378694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:02.472 [2024-11-17 00:59:54.378704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:02.472 [2024-11-17 00:59:54.378710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.472 [2024-11-17 00:59:54.378728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:02.472 [2024-11-17 00:59:54.379115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.472 [2024-11-17 00:59:54.379135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:02.472 [2024-11-17 00:59:54.379142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.377 ms 00:28:02.472 [2024-11-17 00:59:54.379151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.770517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.770571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:10.604 [2024-11-17 01:00:01.770587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7391.317 ms 00:28:10.604 [2024-11-17 01:00:01.770595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.771834] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.771859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:10.604 [2024-11-17 01:00:01.771868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.226 ms 00:28:10.604 [2024-11-17 01:00:01.771874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.772738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.772757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:10.604 [2024-11-17 01:00:01.772770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.841 ms 00:28:10.604 [2024-11-17 01:00:01.772777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.775006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.775035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:10.604 [2024-11-17 01:00:01.775043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.202 ms 00:28:10.604 [2024-11-17 01:00:01.775049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.777382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.777410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:10.604 [2024-11-17 01:00:01.777424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.307 ms 00:28:10.604 [2024-11-17 01:00:01.777431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.777487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.777499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:10.604 [2024-11-17 01:00:01.777506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:10.604 [2024-11-17 01:00:01.777511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.779606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.779634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:10.604 [2024-11-17 01:00:01.779642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.078 ms 00:28:10.604 [2024-11-17 01:00:01.779647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.781401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.781428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:10.604 [2024-11-17 01:00:01.781435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.722 ms 00:28:10.604 [2024-11-17 01:00:01.781441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.783008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.783035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:10.604 [2024-11-17 01:00:01.783042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.542 ms 00:28:10.604 [2024-11-17 01:00:01.783047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.784699] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.784725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:10.604 [2024-11-17 01:00:01.784732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.606 ms 00:28:10.604 [2024-11-17 01:00:01.784737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.604 [2024-11-17 01:00:01.784761] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:10.604 [2024-11-17 01:00:01.784771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:10.604 [2024-11-17 01:00:01.784780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:10.604 [2024-11-17 01:00:01.784786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:10.604 [2024-11-17 01:00:01.784793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:10.604 [2024-11-17 01:00:01.784881] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:10.604 [2024-11-17 01:00:01.784900] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8fc3b2c6-088e-4fda-b760-860cf2b58f6c 00:28:10.604 [2024-11-17 01:00:01.784907] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:10.604 [2024-11-17 01:00:01.784912] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:10.604 [2024-11-17 01:00:01.784919] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:10.604 [2024-11-17 01:00:01.784925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:10.604 [2024-11-17 01:00:01.784934] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:10.604 [2024-11-17 01:00:01.784944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:10.604 [2024-11-17 01:00:01.784950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:10.604 [2024-11-17 01:00:01.784955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:10.604 [2024-11-17 01:00:01.784961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:10.604 [2024-11-17 01:00:01.784966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.604 [2024-11-17 01:00:01.784973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:10.604 [2024-11-17 01:00:01.784979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:28:10.604 [2024-11-17 01:00:01.784988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.786232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.605 [2024-11-17 01:00:01.786257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:10.605 [2024-11-17 01:00:01.786269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.232 ms 00:28:10.605 [2024-11-17 01:00:01.786275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.786344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.605 [2024-11-17 01:00:01.786365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:10.605 [2024-11-17 01:00:01.786372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:28:10.605 [2024-11-17 01:00:01.786378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.790757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.790788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:10.605 [2024-11-17 01:00:01.790795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.790801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.790822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.790828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:10.605 [2024-11-17 01:00:01.790834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.790839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.790885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.790893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:10.605 [2024-11-17 01:00:01.790903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.790909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.790920] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.790927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:10.605 [2024-11-17 01:00:01.790933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.790941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.798585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.798619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:10.605 [2024-11-17 01:00:01.798632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.798638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.805117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.805150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:10.605 [2024-11-17 01:00:01.805158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.805165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.805199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.805207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:10.605 [2024-11-17 01:00:01.805213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.805224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.805261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.805268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:10.605 [2024-11-17 01:00:01.805275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.805281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.805337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.805345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:10.605 [2024-11-17 01:00:01.805363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.805370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.805396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.805403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:10.605 [2024-11-17 01:00:01.805410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.805415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.805444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.805451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:10.605 [2024-11-17 01:00:01.805460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.805466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 
[2024-11-17 01:00:01.805502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:10.605 [2024-11-17 01:00:01.805509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:10.605 [2024-11-17 01:00:01.805516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:10.605 [2024-11-17 01:00:01.805521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.605 [2024-11-17 01:00:01.805611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7426.899 ms, result 0 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93038 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93038 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93038 ']' 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:14.810 01:00:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:14.810 [2024-11-17 01:00:06.266591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:14.810 [2024-11-17 01:00:06.266706] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93038 ] 00:28:14.810 [2024-11-17 01:00:06.406938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.810 [2024-11-17 01:00:06.437333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.810 [2024-11-17 01:00:06.689746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:14.810 [2024-11-17 01:00:06.689799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:14.810 [2024-11-17 01:00:06.827416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.810 [2024-11-17 01:00:06.827451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:14.810 [2024-11-17 01:00:06.827460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:14.810 [2024-11-17 01:00:06.827468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.810 [2024-11-17 01:00:06.827510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.810 [2024-11-17 01:00:06.827517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:14.810 [2024-11-17 01:00:06.827523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:14.810 [2024-11-17 01:00:06.827529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.810 [2024-11-17 01:00:06.827549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:14.810 [2024-11-17 01:00:06.827720] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:14.810 [2024-11-17 01:00:06.827731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.810 [2024-11-17 01:00:06.827736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:14.810 [2024-11-17 01:00:06.827748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:28:14.810 [2024-11-17 01:00:06.827753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.810 [2024-11-17 01:00:06.828687] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:14.810 [2024-11-17 01:00:06.830639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.810 [2024-11-17 01:00:06.830669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:14.810 [2024-11-17 01:00:06.830676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.954 ms 00:28:14.810 [2024-11-17 01:00:06.830686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.810 [2024-11-17 01:00:06.830729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.810 [2024-11-17 01:00:06.830737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:14.810 [2024-11-17 01:00:06.830744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:14.810 [2024-11-17 01:00:06.830750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.810 [2024-11-17 01:00:06.835141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.810 [2024-11-17 
01:00:06.835170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:14.810 [2024-11-17 01:00:06.835176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.358 ms 00:28:14.810 [2024-11-17 01:00:06.835182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.810 [2024-11-17 01:00:06.835215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.810 [2024-11-17 01:00:06.835223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:14.810 [2024-11-17 01:00:06.835229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:14.810 [2024-11-17 01:00:06.835234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.810 [2024-11-17 01:00:06.835270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.810 [2024-11-17 01:00:06.835278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:14.811 [2024-11-17 01:00:06.835286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:14.811 [2024-11-17 01:00:06.835291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.811 [2024-11-17 01:00:06.835306] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:14.811 [2024-11-17 01:00:06.836458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.811 [2024-11-17 01:00:06.836481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:14.811 [2024-11-17 01:00:06.836489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.155 ms 00:28:14.811 [2024-11-17 01:00:06.836494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.811 [2024-11-17 01:00:06.836515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.811 [2024-11-17 01:00:06.836522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:14.811 [2024-11-17 01:00:06.836530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:14.811 [2024-11-17 01:00:06.836536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.811 [2024-11-17 01:00:06.836550] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:14.811 [2024-11-17 01:00:06.836565] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:14.811 [2024-11-17 01:00:06.836591] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:14.811 [2024-11-17 01:00:06.836603] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:14.811 [2024-11-17 01:00:06.836681] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:14.811 [2024-11-17 01:00:06.836690] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:14.811 [2024-11-17 01:00:06.836698] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:14.811 [2024-11-17 01:00:06.836707] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:14.811 [2024-11-17 01:00:06.836714] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:14.811 [2024-11-17 01:00:06.836720] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:14.811 [2024-11-17 01:00:06.836726] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:14.811 [2024-11-17 01:00:06.836731] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:14.811 [2024-11-17 01:00:06.836737] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:14.811 [2024-11-17 01:00:06.836742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.811 [2024-11-17 01:00:06.836748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:14.811 [2024-11-17 01:00:06.836754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:28:14.811 [2024-11-17 01:00:06.836761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.811 [2024-11-17 01:00:06.836825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.811 [2024-11-17 01:00:06.836835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:14.811 [2024-11-17 01:00:06.836841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:14.811 [2024-11-17 01:00:06.836849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.811 [2024-11-17 01:00:06.836938] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:14.811 [2024-11-17 01:00:06.836950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:14.811 [2024-11-17 01:00:06.836959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:14.811 [2024-11-17 01:00:06.836965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.836972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:14.811 [2024-11-17 01:00:06.836978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.836983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:14.811 [2024-11-17 01:00:06.836988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:14.811 [2024-11-17 01:00:06.836993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:14.811 [2024-11-17 01:00:06.836998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:14.811 [2024-11-17 01:00:06.837009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:14.811 [2024-11-17 01:00:06.837014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:14.811 [2024-11-17 01:00:06.837028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:14.811 [2024-11-17 01:00:06.837034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:14.811 [2024-11-17 01:00:06.837043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:14.811 [2024-11-17 01:00:06.837051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837056] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:14.811 [2024-11-17 01:00:06.837061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:14.811 [2024-11-17 01:00:06.837065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:14.811 [2024-11-17 01:00:06.837070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:14.811 [2024-11-17 01:00:06.837075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:14.811 [2024-11-17 01:00:06.837080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:14.811 [2024-11-17 01:00:06.837085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:14.811 [2024-11-17 01:00:06.837089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:14.811 [2024-11-17 01:00:06.837094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:14.811 [2024-11-17 01:00:06.837099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:14.811 [2024-11-17 01:00:06.837104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:14.811 [2024-11-17 01:00:06.837110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:14.811 [2024-11-17 01:00:06.837115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:14.811 [2024-11-17 01:00:06.837121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:14.811 [2024-11-17 01:00:06.837126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:14.811 [2024-11-17 01:00:06.837139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:14.811 [2024-11-17 01:00:06.837145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:14.811 [2024-11-17 01:00:06.837157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:14.811 [2024-11-17 01:00:06.837175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:14.811 [2024-11-17 01:00:06.837180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837189] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:14.811 [2024-11-17 01:00:06.837196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:14.811 [2024-11-17 01:00:06.837202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:14.811 [2024-11-17 01:00:06.837208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:14.811 [2024-11-17 01:00:06.837215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:14.811 [2024-11-17 01:00:06.837220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:14.811 [2024-11-17 01:00:06.837226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:14.811 [2024-11-17 01:00:06.837235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:14.811 [2024-11-17 01:00:06.837240] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:14.811 [2024-11-17 01:00:06.837247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:14.811 [2024-11-17 01:00:06.837254] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:14.811 [2024-11-17 01:00:06.837261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:14.811 [2024-11-17 01:00:06.837268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:14.811 [2024-11-17 01:00:06.837274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:14.811 [2024-11-17 01:00:06.837280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:14.811 [2024-11-17 01:00:06.837287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:14.811 [2024-11-17 01:00:06.837294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:14.811 [2024-11-17 01:00:06.837300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:14.811 [2024-11-17 01:00:06.837306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:14.811 [2024-11-17 01:00:06.837312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:14.811 [2024-11-17 01:00:06.837319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:14.811 [2024-11-17 01:00:06.837325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:14.811 [2024-11-17 01:00:06.837331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:14.811 [2024-11-17 01:00:06.837339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:14.812 [2024-11-17 01:00:06.837345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:14.812 [2024-11-17 01:00:06.837351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:14.812 [2024-11-17 01:00:06.837370] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:14.812 [2024-11-17 01:00:06.837377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:14.812 [2024-11-17 01:00:06.837386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:14.812 [2024-11-17 01:00:06.837393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:14.812 [2024-11-17 01:00:06.837399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:14.812 [2024-11-17 01:00:06.837406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:14.812 [2024-11-17 01:00:06.837416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.812 [2024-11-17 01:00:06.837422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:14.812 [2024-11-17 01:00:06.837430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 00:28:14.812 [2024-11-17 01:00:06.837437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.812 [2024-11-17 01:00:06.837469] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:14.812 [2024-11-17 01:00:06.837477] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:18.120 [2024-11-17 01:00:10.067606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.067717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:18.120 [2024-11-17 01:00:10.067734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3230.121 ms 00:28:18.120 [2024-11-17 01:00:10.067745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.081583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.081639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:18.120 [2024-11-17 01:00:10.081654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.710 ms 00:28:18.120 [2024-11-17 01:00:10.081663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.081743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.081754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:18.120 [2024-11-17 01:00:10.081764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:18.120 [2024-11-17 01:00:10.081773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.102052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.102114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:18.120 [2024-11-17 01:00:10.102128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.226 ms 00:28:18.120 [2024-11-17 01:00:10.102137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.102190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.102200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:18.120 [2024-11-17 01:00:10.102209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:18.120 [2024-11-17 01:00:10.102217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.102837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.102871] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:18.120 [2024-11-17 01:00:10.102883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 00:28:18.120 [2024-11-17 01:00:10.102892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.102964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.102976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:18.120 [2024-11-17 01:00:10.102987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:18.120 [2024-11-17 01:00:10.102996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.111888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.111939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:18.120 [2024-11-17 01:00:10.111951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.867 ms 00:28:18.120 [2024-11-17 01:00:10.111959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.115883] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:18.120 [2024-11-17 01:00:10.115948] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:18.120 [2024-11-17 01:00:10.115962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.115972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:18.120 [2024-11-17 01:00:10.115982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.889 ms 00:28:18.120 [2024-11-17 01:00:10.115990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.121304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.121371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:18.120 [2024-11-17 01:00:10.121384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.251 ms 00:28:18.120 [2024-11-17 01:00:10.121393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.123934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.123984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:18.120 [2024-11-17 01:00:10.123994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.479 ms 00:28:18.120 [2024-11-17 01:00:10.124002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.126682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.126733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:18.120 [2024-11-17 01:00:10.126744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.633 ms 00:28:18.120 [2024-11-17 01:00:10.126751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.127143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.127157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:18.120 [2024-11-17 
01:00:10.127167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:28:18.120 [2024-11-17 01:00:10.127175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.150442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.150520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:18.120 [2024-11-17 01:00:10.150534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.245 ms 00:28:18.120 [2024-11-17 01:00:10.150542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.158818] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:18.120 [2024-11-17 01:00:10.159833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.159884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:18.120 [2024-11-17 01:00:10.159900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.231 ms 00:28:18.120 [2024-11-17 01:00:10.159913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.159996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.160008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:18.120 [2024-11-17 01:00:10.160017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:18.120 [2024-11-17 01:00:10.160026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.160075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.160087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:18.120 [2024-11-17 01:00:10.160102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:18.120 [2024-11-17 01:00:10.160112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.160137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.160149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:18.120 [2024-11-17 01:00:10.160157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:18.120 [2024-11-17 01:00:10.160168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.160207] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:18.120 [2024-11-17 01:00:10.160217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.160225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:18.120 [2024-11-17 01:00:10.160234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:18.120 [2024-11-17 01:00:10.160242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.165515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.165568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:18.120 [2024-11-17 01:00:10.165579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.248 ms 00:28:18.120 [2024-11-17 01:00:10.165588] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.165688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.120 [2024-11-17 01:00:10.165701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:18.120 [2024-11-17 01:00:10.165710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:18.120 [2024-11-17 01:00:10.165719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.120 [2024-11-17 01:00:10.166932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3339.013 ms, result 0 00:28:18.120 [2024-11-17 01:00:10.180313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.381 [2024-11-17 01:00:10.196295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:18.381 [2024-11-17 01:00:10.204434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:18.643 01:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:18.643 01:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:18.643 01:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:18.643 01:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:18.643 01:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:18.904 [2024-11-17 01:00:10.728789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.904 [2024-11-17 01:00:10.728849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:18.904 [2024-11-17 01:00:10.728869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:18.904 [2024-11-17 01:00:10.728878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.904 [2024-11-17 01:00:10.728915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.904 [2024-11-17 01:00:10.728925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:18.904 [2024-11-17 01:00:10.728939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:18.904 [2024-11-17 01:00:10.728947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.904 [2024-11-17 01:00:10.728970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.904 [2024-11-17 01:00:10.728979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:18.904 [2024-11-17 01:00:10.728989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:18.904 [2024-11-17 01:00:10.728996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.904 [2024-11-17 01:00:10.729057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.265 ms, result 0 00:28:18.904 true 00:28:18.904 01:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:18.904 { 00:28:18.904 "name": "ftl", 00:28:18.904 "properties": [ 00:28:18.904 { 00:28:18.904 "name": "superblock_version", 00:28:18.904 "value": 5, 00:28:18.904 "read-only": true 00:28:18.904 }, 00:28:18.904 { 
00:28:18.904 "name": "base_device", 00:28:18.904 "bands": [ 00:28:18.904 { 00:28:18.904 "id": 0, 00:28:18.904 "state": "CLOSED", 00:28:18.904 "validity": 1.0 00:28:18.904 }, 00:28:18.904 { 00:28:18.904 "id": 1, 00:28:18.904 "state": "CLOSED", 00:28:18.904 "validity": 1.0 00:28:18.904 }, 00:28:18.904 { 00:28:18.904 "id": 2, 00:28:18.904 "state": "CLOSED", 00:28:18.904 "validity": 0.007843137254901933 00:28:18.904 }, 00:28:18.904 { 00:28:18.904 "id": 3, 00:28:18.904 "state": "FREE", 00:28:18.904 "validity": 0.0 00:28:18.904 }, 00:28:18.904 { 00:28:18.904 "id": 4, 00:28:18.904 "state": "FREE", 00:28:18.904 "validity": 0.0 00:28:18.904 }, 00:28:18.904 { 00:28:18.904 "id": 5, 00:28:18.904 "state": "FREE", 00:28:18.904 "validity": 0.0 00:28:18.904 }, 00:28:18.904 { 00:28:18.904 "id": 6, 00:28:18.904 "state": "FREE", 00:28:18.904 "validity": 0.0 00:28:18.904 }, 00:28:18.904 { 00:28:18.904 "id": 7, 00:28:18.904 "state": "FREE", 00:28:18.904 "validity": 0.0 00:28:18.904 }, 00:28:18.904 { 00:28:18.904 "id": 8, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 9, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 10, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 11, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 12, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 13, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 14, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 15, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 16, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 17, 00:28:18.905 "state": "FREE", 00:28:18.905 "validity": 0.0 00:28:18.905 } 00:28:18.905 ], 00:28:18.905 "read-only": true 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "name": "cache_device", 00:28:18.905 "type": "bdev", 00:28:18.905 "chunks": [ 00:28:18.905 { 00:28:18.905 "id": 0, 00:28:18.905 "state": "INACTIVE", 00:28:18.905 "utilization": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 1, 00:28:18.905 "state": "OPEN", 00:28:18.905 "utilization": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 2, 00:28:18.905 "state": "OPEN", 00:28:18.905 "utilization": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 3, 00:28:18.905 "state": "FREE", 00:28:18.905 "utilization": 0.0 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "id": 4, 00:28:18.905 "state": "FREE", 00:28:18.905 "utilization": 0.0 00:28:18.905 } 00:28:18.905 ], 00:28:18.905 "read-only": true 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "name": "verbose_mode", 00:28:18.905 "value": true, 00:28:18.905 "unit": "", 00:28:18.905 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:18.905 }, 00:28:18.905 { 00:28:18.905 "name": "prep_upgrade_on_shutdown", 00:28:18.905 "value": false, 00:28:18.905 "unit": "", 00:28:18.905 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:18.905 } 00:28:18.905 ] 00:28:18.905 } 00:28:18.905 01:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:18.905 01:00:10 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:18.905 01:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:19.166 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:19.166 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:19.166 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:19.166 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:19.166 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:19.427 Validate MD5 checksum, iteration 1 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:19.427 01:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:19.427 [2024-11-17 01:00:11.488597] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
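(Note: the used=0 / opened=0 results above come straight from the bdev_ftl_get_properties JSON printed earlier; the intent appears to be that checksum validation only starts when no cache chunk carries non-zero utilization and no band is in the OPENED state. A minimal offline replay of the same two checks, assuming the RPC output has been saved to a file — the name props.json is hypothetical:

    # Hypothetical replay of the two jq checks traced above; props.json is a
    # saved copy of the `bdev_ftl_get_properties -b ftl` output shown earlier.
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' props.json)
    opened=$(jq '[.properties[] | select(.name == "bands")
                 | .bands[] | select(.state == "OPENED")] | length' props.json)
    # Both counts are 0 in the trace above, so the [[ $used -ne 0 ]] and
    # [[ $opened -ne 0 ]] guards fall through and the test continues.

End of note.)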
00:28:19.427 [2024-11-17 01:00:11.488790] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93107 ] 00:28:19.687 [2024-11-17 01:00:11.639196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.687 [2024-11-17 01:00:11.710792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.077  [2024-11-17T01:00:14.084Z] Copying: 555/1024 [MB] (555 MBps) [2024-11-17T01:00:14.654Z] Copying: 1024/1024 [MB] (average 549 MBps) 00:28:22.591 00:28:22.591 01:00:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:22.591 01:00:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:25.140 Validate MD5 checksum, iteration 2 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6670e8afe43c615cee048dc7f7974aca 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6670e8afe43c615cee048dc7f7974aca != \6\6\7\0\e\8\a\f\e\4\3\c\6\1\5\c\e\e\0\4\8\d\c\7\f\7\9\7\4\a\c\a ]] 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:25.140 01:00:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:25.140 [2024-11-17 01:00:16.965462] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
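(Note: iteration 1 above shows the whole shape of test_validate_checksum: read 1024 MiB off the FTL bdev over NVMe/TCP with spdk_dd, hash the local copy, compare it against the checksum recorded earlier in the run, then slide the window forward by the blocks just read. A rough bash restatement of that loop — ref_md5 and $testfile are hypothetical names; tcp_dd is the helper traced above:

    # Sketch of the validation loop traced above, under the stated assumptions.
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))                       # advance by the blocks just read
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${ref_md5[i]}" ]] || exit 1     # e.g. 6670e8af... in iteration 1
    done

The same two sums, 6670e8af... and 8820013d..., reappear when this loop runs again after the dirty restart below, which is the actual pass condition of the test. End of note.)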
00:28:25.140 [2024-11-17 01:00:16.965571] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93167 ] 00:28:25.140 [2024-11-17 01:00:17.111999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.140 [2024-11-17 01:00:17.150956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.525  [2024-11-17T01:00:19.220Z] Copying: 644/1024 [MB] (644 MBps) [2024-11-17T01:00:19.815Z] Copying: 1024/1024 [MB] (average 643 MBps) 00:28:27.752 00:28:27.752 01:00:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:27.752 01:00:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8820013dc5a45d8abe6f3b706ba7b72e 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8820013dc5a45d8abe6f3b706ba7b72e != \8\8\2\0\0\1\3\d\c\5\a\4\5\d\8\a\b\e\6\f\3\b\7\0\6\b\a\7\b\7\2\e ]] 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 93038 ]] 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 93038 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93226 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93226 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93226 ']' 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
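(Note: this is the step the test is named for: the running target, pid 93038, is killed with SIGKILL so FTL never executes its clean shutdown path, and a fresh target, pid 93226, is started from the same tgt.json. The long startup trace that follows — band-state recovery, P2L-checkpoint restore, and the "Recover open chunk" passes — is that dirty-startup path being exercised. A condensed sketch of the tcp_target_shutdown_dirty and tcp_target_setup helpers traced above; $spdk_dir stands in for /home/vagrant/spdk_repo/spdk, and waitforlisten is the autotest_common.sh function visible in the trace:

    # Condensed from the traced helpers; a sketch, not the exact script text.
    kill -9 "$spdk_tgt_pid"             # SIGKILL: no clean FTL shutdown is run
    unset spdk_tgt_pid
    "$spdk_dir"/build/bin/spdk_tgt '--cpumask=[0]' \
        --config="$spdk_dir"/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"       # polls /var/tmp/spdk.sock, as echoed above

End of note.)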
00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:30.302 01:00:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:30.302 [2024-11-17 01:00:21.843880] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:30.302 [2024-11-17 01:00:21.843993] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93226 ] 00:28:30.302 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 93038 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:30.302 [2024-11-17 01:00:21.992086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.302 [2024-11-17 01:00:22.046455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.302 [2024-11-17 01:00:22.344300] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:30.302 [2024-11-17 01:00:22.344369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:30.563 [2024-11-17 01:00:22.486248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.486283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:30.563 [2024-11-17 01:00:22.486297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:30.563 [2024-11-17 01:00:22.486303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.486351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.486369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:30.563 [2024-11-17 01:00:22.486376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:30.563 [2024-11-17 01:00:22.486383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.486402] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:30.563 [2024-11-17 01:00:22.486837] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:30.563 [2024-11-17 01:00:22.486874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.486881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:30.563 [2024-11-17 01:00:22.486894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.478 ms 00:28:30.563 [2024-11-17 01:00:22.486900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.487118] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:30.563 [2024-11-17 01:00:22.491760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.491789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:30.563 [2024-11-17 01:00:22.491797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.642 ms 00:28:30.563 [2024-11-17 01:00:22.491808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.492783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:30.563 [2024-11-17 01:00:22.492808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:30.563 [2024-11-17 01:00:22.492817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:30.563 [2024-11-17 01:00:22.492823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.493042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.493051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:30.563 [2024-11-17 01:00:22.493060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:28:30.563 [2024-11-17 01:00:22.493066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.493097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.493104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:30.563 [2024-11-17 01:00:22.493110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:30.563 [2024-11-17 01:00:22.493119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.493143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.493150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:30.563 [2024-11-17 01:00:22.493157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:30.563 [2024-11-17 01:00:22.493168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.493186] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:30.563 [2024-11-17 01:00:22.493990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.494013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:30.563 [2024-11-17 01:00:22.494020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.809 ms 00:28:30.563 [2024-11-17 01:00:22.494026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.494046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.494053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:30.563 [2024-11-17 01:00:22.494059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:30.563 [2024-11-17 01:00:22.494068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.494092] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:30.563 [2024-11-17 01:00:22.494109] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:30.563 [2024-11-17 01:00:22.494137] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:30.563 [2024-11-17 01:00:22.494148] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:30.563 [2024-11-17 01:00:22.494232] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:30.563 [2024-11-17 01:00:22.494246] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:30.563 [2024-11-17 01:00:22.494257] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:30.563 [2024-11-17 01:00:22.494266] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:30.563 [2024-11-17 01:00:22.494273] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:30.563 [2024-11-17 01:00:22.494279] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:30.563 [2024-11-17 01:00:22.494285] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:30.563 [2024-11-17 01:00:22.494290] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:30.563 [2024-11-17 01:00:22.494296] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:30.563 [2024-11-17 01:00:22.494302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.494309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:30.563 [2024-11-17 01:00:22.494315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.212 ms 00:28:30.563 [2024-11-17 01:00:22.494320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.494405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.563 [2024-11-17 01:00:22.494418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:30.563 [2024-11-17 01:00:22.494425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:30.563 [2024-11-17 01:00:22.494433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.563 [2024-11-17 01:00:22.494524] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:30.563 [2024-11-17 01:00:22.494537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:30.563 [2024-11-17 01:00:22.494544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:30.563 [2024-11-17 01:00:22.494550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:30.563 [2024-11-17 01:00:22.494556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:30.563 [2024-11-17 01:00:22.494561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:30.563 [2024-11-17 01:00:22.494568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:30.563 [2024-11-17 01:00:22.494574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:30.563 [2024-11-17 01:00:22.494579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:30.563 [2024-11-17 01:00:22.494584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:30.563 [2024-11-17 01:00:22.494589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:30.563 [2024-11-17 01:00:22.494595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:30.563 [2024-11-17 01:00:22.494606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:30.563 [2024-11-17 01:00:22.494615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:30.563 [2024-11-17 01:00:22.494621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:30.563 [2024-11-17 01:00:22.494629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:30.563 [2024-11-17 01:00:22.494635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:30.563 [2024-11-17 01:00:22.494641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:30.563 [2024-11-17 01:00:22.494647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:30.563 [2024-11-17 01:00:22.494653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:30.563 [2024-11-17 01:00:22.494659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:30.563 [2024-11-17 01:00:22.494665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:30.563 [2024-11-17 01:00:22.494671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:30.563 [2024-11-17 01:00:22.494678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:30.563 [2024-11-17 01:00:22.494684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:30.563 [2024-11-17 01:00:22.494690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:30.563 [2024-11-17 01:00:22.494696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:30.563 [2024-11-17 01:00:22.494701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:30.563 [2024-11-17 01:00:22.494707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:30.563 [2024-11-17 01:00:22.494713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:30.563 [2024-11-17 01:00:22.494719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:30.563 [2024-11-17 01:00:22.494727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:30.564 [2024-11-17 01:00:22.494732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:30.564 [2024-11-17 01:00:22.494739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:30.564 [2024-11-17 01:00:22.494745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:30.564 [2024-11-17 01:00:22.494751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:30.564 [2024-11-17 01:00:22.494756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:30.564 [2024-11-17 01:00:22.494764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:30.564 [2024-11-17 01:00:22.494770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:30.564 [2024-11-17 01:00:22.494776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:30.564 [2024-11-17 01:00:22.494781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:30.564 [2024-11-17 01:00:22.494787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:30.564 [2024-11-17 01:00:22.494793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:30.564 [2024-11-17 01:00:22.494798] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:30.564 [2024-11-17 01:00:22.494807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:30.564 [2024-11-17 01:00:22.494816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:30.564 [2024-11-17 01:00:22.494822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:30.564 [2024-11-17 01:00:22.494831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:30.564 [2024-11-17 01:00:22.494840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:30.564 [2024-11-17 01:00:22.494846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:30.564 [2024-11-17 01:00:22.494852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:30.564 [2024-11-17 01:00:22.494858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:30.564 [2024-11-17 01:00:22.494864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:30.564 [2024-11-17 01:00:22.494871] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:30.564 [2024-11-17 01:00:22.494880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:30.564 [2024-11-17 01:00:22.494894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:30.564 [2024-11-17 01:00:22.494912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:30.564 [2024-11-17 01:00:22.494919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:30.564 [2024-11-17 01:00:22.494925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:30.564 [2024-11-17 01:00:22.494931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:30.564 [2024-11-17 01:00:22.494979] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:30.564 [2024-11-17 01:00:22.494986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:30.564 [2024-11-17 01:00:22.494999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:30.564 [2024-11-17 01:00:22.495005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:30.564 [2024-11-17 01:00:22.495012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:30.564 [2024-11-17 01:00:22.495018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.495024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:30.564 [2024-11-17 01:00:22.495029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.549 ms 00:28:30.564 [2024-11-17 01:00:22.495034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.503688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.503715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:30.564 [2024-11-17 01:00:22.503726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.613 ms 00:28:30.564 [2024-11-17 01:00:22.503732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.503760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.503768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:30.564 [2024-11-17 01:00:22.503775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:30.564 [2024-11-17 01:00:22.503781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.520829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.520866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:30.564 [2024-11-17 01:00:22.520876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.012 ms 00:28:30.564 [2024-11-17 01:00:22.520882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.520920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.520927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:30.564 [2024-11-17 01:00:22.520934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:30.564 [2024-11-17 01:00:22.520940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.521031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.521040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:30.564 [2024-11-17 01:00:22.521050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:30.564 [2024-11-17 01:00:22.521059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.521092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.521102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:30.564 [2024-11-17 01:00:22.521108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:30.564 [2024-11-17 01:00:22.521114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.528101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.528144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:30.564 [2024-11-17 01:00:22.528156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.968 ms 00:28:30.564 [2024-11-17 01:00:22.528167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.528275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.528291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:30.564 [2024-11-17 01:00:22.528304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:30.564 [2024-11-17 01:00:22.528319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.533815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.533857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:30.564 [2024-11-17 01:00:22.533870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.464 ms 00:28:30.564 [2024-11-17 01:00:22.533880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.535505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.535539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:30.564 [2024-11-17 01:00:22.535552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.376 ms 00:28:30.564 [2024-11-17 01:00:22.535562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.553794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.553826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:30.564 [2024-11-17 01:00:22.553836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.202 ms 00:28:30.564 [2024-11-17 01:00:22.553842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.553957] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:30.564 [2024-11-17 01:00:22.554043] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:30.564 [2024-11-17 01:00:22.554125] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:30.564 [2024-11-17 01:00:22.554210] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:30.564 [2024-11-17 01:00:22.554223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.564 [2024-11-17 01:00:22.554230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:30.564 [2024-11-17 
01:00:22.554241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.348 ms 00:28:30.564 [2024-11-17 01:00:22.554248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.564 [2024-11-17 01:00:22.554277] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:30.565 [2024-11-17 01:00:22.554289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.565 [2024-11-17 01:00:22.554296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:30.565 [2024-11-17 01:00:22.554302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:30.565 [2024-11-17 01:00:22.554311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.565 [2024-11-17 01:00:22.557541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.565 [2024-11-17 01:00:22.557570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:30.565 [2024-11-17 01:00:22.557579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.213 ms 00:28:30.565 [2024-11-17 01:00:22.557585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.565 [2024-11-17 01:00:22.558148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.565 [2024-11-17 01:00:22.558173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:30.565 [2024-11-17 01:00:22.558185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:30.565 [2024-11-17 01:00:22.558191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.565 [2024-11-17 01:00:22.558242] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:30.565 [2024-11-17 01:00:22.558420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.565 [2024-11-17 01:00:22.558435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:30.565 [2024-11-17 01:00:22.558442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:28:30.565 [2024-11-17 01:00:22.558448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.135 [2024-11-17 01:00:23.106976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.135 [2024-11-17 01:00:23.107019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:31.135 [2024-11-17 01:00:23.107030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 548.283 ms 00:28:31.135 [2024-11-17 01:00:23.107037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-17 01:00:23.108395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-17 01:00:23.108423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:31.136 [2024-11-17 01:00:23.108432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.057 ms 00:28:31.136 [2024-11-17 01:00:23.108440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-17 01:00:23.109086] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:31.136 [2024-11-17 01:00:23.109116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-17 01:00:23.109123] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:31.136 [2024-11-17 01:00:23.109132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:28:31.136 [2024-11-17 01:00:23.109138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-17 01:00:23.109163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-17 01:00:23.109172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:31.136 [2024-11-17 01:00:23.109178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:31.136 [2024-11-17 01:00:23.109189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-17 01:00:23.109221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 550.971 ms, result 0 00:28:31.136 [2024-11-17 01:00:23.109251] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:31.136 [2024-11-17 01:00:23.109368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-17 01:00:23.109379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:31.136 [2024-11-17 01:00:23.109386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.117 ms 00:28:31.136 [2024-11-17 01:00:23.109392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.650527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.650572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:31.709 [2024-11-17 01:00:23.650585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 540.810 ms 00:28:31.709 [2024-11-17 01:00:23.650593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.652329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.652378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:31.709 [2024-11-17 01:00:23.652390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.464 ms 00:28:31.709 [2024-11-17 01:00:23.652398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.652984] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:31.709 [2024-11-17 01:00:23.653045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.653054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:31.709 [2024-11-17 01:00:23.653064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.628 ms 00:28:31.709 [2024-11-17 01:00:23.653072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.653092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.653100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:31.709 [2024-11-17 01:00:23.653108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:31.709 [2024-11-17 01:00:23.653115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 
01:00:23.653150] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 543.889 ms, result 0 00:28:31.709 [2024-11-17 01:00:23.653191] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:31.709 [2024-11-17 01:00:23.653202] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:31.709 [2024-11-17 01:00:23.653212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.653220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:31.709 [2024-11-17 01:00:23.653228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1094.981 ms 00:28:31.709 [2024-11-17 01:00:23.653236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.653271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.653283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:31.709 [2024-11-17 01:00:23.653291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:31.709 [2024-11-17 01:00:23.653299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.661322] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:31.709 [2024-11-17 01:00:23.661425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.661436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:31.709 [2024-11-17 01:00:23.661445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.110 ms 00:28:31.709 [2024-11-17 01:00:23.661453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.662149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.662173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:31.709 [2024-11-17 01:00:23.662182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.619 ms 00:28:31.709 [2024-11-17 01:00:23.662189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.664447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.664468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:31.709 [2024-11-17 01:00:23.664477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.242 ms 00:28:31.709 [2024-11-17 01:00:23.664492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.664528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.664537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:31.709 [2024-11-17 01:00:23.664546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:31.709 [2024-11-17 01:00:23.664553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.664661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.664672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:31.709 
[2024-11-17 01:00:23.664680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:31.709 [2024-11-17 01:00:23.664688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.664711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.664719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:31.709 [2024-11-17 01:00:23.664727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:31.709 [2024-11-17 01:00:23.664735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.664763] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:31.709 [2024-11-17 01:00:23.664775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.664783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:31.709 [2024-11-17 01:00:23.664791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:31.709 [2024-11-17 01:00:23.664799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.664858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.709 [2024-11-17 01:00:23.664873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:31.709 [2024-11-17 01:00:23.664881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:31.709 [2024-11-17 01:00:23.664889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.709 [2024-11-17 01:00:23.665913] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1179.237 ms, result 0 00:28:31.709 [2024-11-17 01:00:23.678284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.710 [2024-11-17 01:00:23.694292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:31.710 [2024-11-17 01:00:23.702448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:32.653 Validate MD5 checksum, iteration 1 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:32.653 01:00:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:32.653 01:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:32.653 [2024-11-17 01:00:24.476459] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:32.653 [2024-11-17 01:00:24.476580] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93255 ] 00:28:32.653 [2024-11-17 01:00:24.627317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.653 [2024-11-17 01:00:24.660815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.040  [2024-11-17T01:00:27.041Z] Copying: 571/1024 [MB] (571 MBps) [2024-11-17T01:00:27.611Z] Copying: 1024/1024 [MB] (average 571 MBps) 00:28:35.548 00:28:35.548 01:00:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:35.548 01:00:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:38.096 Validate MD5 checksum, iteration 2 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6670e8afe43c615cee048dc7f7974aca 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6670e8afe43c615cee048dc7f7974aca != \6\6\7\0\e\8\a\f\e\4\3\c\6\1\5\c\e\e\0\4\8\d\c\7\f\7\9\7\4\a\c\a ]] 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:38.096 01:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:38.096 [2024-11-17 01:00:29.657385] Starting SPDK v24.09.1-pre git 
sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:38.096 [2024-11-17 01:00:29.657636] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93317 ] 00:28:38.096 [2024-11-17 01:00:29.805143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.096 [2024-11-17 01:00:29.837673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.483  [2024-11-17T01:00:32.115Z] Copying: 578/1024 [MB] (578 MBps) [2024-11-17T01:00:32.377Z] Copying: 1024/1024 [MB] (average 556 MBps) 00:28:40.314 00:28:40.314 01:00:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:40.314 01:00:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:42.856 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8820013dc5a45d8abe6f3b706ba7b72e 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8820013dc5a45d8abe6f3b706ba7b72e != \8\8\2\0\0\1\3\d\c\5\a\4\5\d\8\a\b\e\6\f\3\b\7\0\6\b\a\7\b\7\2\e ]] 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93226 ]] 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93226 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93226 ']' 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93226 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93226 00:28:42.857 killing process with pid 93226 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93226' 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 93226 00:28:42.857 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93226 00:28:42.857 [2024-11-17 01:00:34.710333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:42.857 [2024-11-17 01:00:34.714705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.714739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:42.857 [2024-11-17 01:00:34.714750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:42.857 [2024-11-17 01:00:34.714757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.714779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:42.857 [2024-11-17 01:00:34.715294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.715310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:42.857 [2024-11-17 01:00:34.715317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:28:42.857 [2024-11-17 01:00:34.715324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.715545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.715555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:42.857 [2024-11-17 01:00:34.715562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:28:42.857 [2024-11-17 01:00:34.715568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.717054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.717079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:42.857 [2024-11-17 01:00:34.717087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.473 ms 00:28:42.857 [2024-11-17 01:00:34.717093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.717939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.717960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:42.857 [2024-11-17 01:00:34.717969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.820 ms 00:28:42.857 [2024-11-17 01:00:34.717976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.719887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.719917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:42.857 [2024-11-17 01:00:34.719925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.884 ms 00:28:42.857 [2024-11-17 01:00:34.719932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.721251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.721284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:42.857 [2024-11-17 01:00:34.721292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.291 ms 00:28:42.857 [2024-11-17 01:00:34.721299] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.721373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.721382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:42.857 [2024-11-17 01:00:34.721389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:42.857 [2024-11-17 01:00:34.721396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.722561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.722586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:42.857 [2024-11-17 01:00:34.722593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.152 ms 00:28:42.857 [2024-11-17 01:00:34.722598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.723932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.723956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:42.857 [2024-11-17 01:00:34.723963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.308 ms 00:28:42.857 [2024-11-17 01:00:34.723968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.725091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.725117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:42.857 [2024-11-17 01:00:34.725125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.097 ms 00:28:42.857 [2024-11-17 01:00:34.725130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.726282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.726307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:42.857 [2024-11-17 01:00:34.726314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.102 ms 00:28:42.857 [2024-11-17 01:00:34.726320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.857 [2024-11-17 01:00:34.726343] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:42.857 [2024-11-17 01:00:34.726368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:42.857 [2024-11-17 01:00:34.726381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:42.857 [2024-11-17 01:00:34.726387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:42.857 [2024-11-17 01:00:34.726394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 
[2024-11-17 01:00:34.726424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:42.857 [2024-11-17 01:00:34.726486] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:42.857 [2024-11-17 01:00:34.726492] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8fc3b2c6-088e-4fda-b760-860cf2b58f6c 00:28:42.857 [2024-11-17 01:00:34.726498] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:42.857 [2024-11-17 01:00:34.726503] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:42.857 [2024-11-17 01:00:34.726509] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:42.857 [2024-11-17 01:00:34.726515] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:42.857 [2024-11-17 01:00:34.726521] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:42.857 [2024-11-17 01:00:34.726528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:42.857 [2024-11-17 01:00:34.726534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:42.857 [2024-11-17 01:00:34.726539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:42.857 [2024-11-17 01:00:34.726544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:42.857 [2024-11-17 01:00:34.726552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.857 [2024-11-17 01:00:34.726559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:42.857 [2024-11-17 01:00:34.726566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:28:42.858 [2024-11-17 01:00:34.726575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.728214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.858 [2024-11-17 01:00:34.728243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:42.858 [2024-11-17 01:00:34.728250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.617 ms 00:28:42.858 [2024-11-17 01:00:34.728256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:28:42.858 [2024-11-17 01:00:34.728344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.858 [2024-11-17 01:00:34.728351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:42.858 [2024-11-17 01:00:34.728375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:42.858 [2024-11-17 01:00:34.728381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.734463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.734492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:42.858 [2024-11-17 01:00:34.734500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.734506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.734532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.734540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:42.858 [2024-11-17 01:00:34.734551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.734557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.734616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.734625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:42.858 [2024-11-17 01:00:34.734632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.734638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.734652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.734658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:42.858 [2024-11-17 01:00:34.734664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.734673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.745176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.745215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:42.858 [2024-11-17 01:00:34.745223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.745230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.753636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.753668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:42.858 [2024-11-17 01:00:34.753681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.753688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.753747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.753756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:42.858 [2024-11-17 01:00:34.753763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.753769] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.753804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.753811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:42.858 [2024-11-17 01:00:34.753820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.753826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.753890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.753898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:42.858 [2024-11-17 01:00:34.753904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.753910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.753938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.753945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:42.858 [2024-11-17 01:00:34.753952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.753958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.753996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.754003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:42.858 [2024-11-17 01:00:34.754010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.754017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.754055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:42.858 [2024-11-17 01:00:34.754063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:42.858 [2024-11-17 01:00:34.754068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:42.858 [2024-11-17 01:00:34.754075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.858 [2024-11-17 01:00:34.754185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 39.453 ms, result 0 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:43.120 Remove shared memory files 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:43.120 01:00:34 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid93038 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:43.120 00:28:43.120 real 1m14.664s 00:28:43.120 user 1m40.147s 00:28:43.120 sys 0m20.473s 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:43.120 ************************************ 00:28:43.120 END TEST ftl_upgrade_shutdown 00:28:43.120 ************************************ 00:28:43.120 01:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:43.120 01:00:35 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:43.120 01:00:35 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:43.120 01:00:35 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:43.120 01:00:35 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:43.120 01:00:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:43.120 ************************************ 00:28:43.120 START TEST ftl_restore_fast 00:28:43.120 ************************************ 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:43.120 * Looking for test storage... 00:28:43.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:43.120 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.121 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:43.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.121 --rc genhtml_branch_coverage=1 00:28:43.121 --rc genhtml_function_coverage=1 00:28:43.121 --rc genhtml_legend=1 00:28:43.121 --rc geninfo_all_blocks=1 00:28:43.121 --rc geninfo_unexecuted_blocks=1 00:28:43.121 00:28:43.121 ' 00:28:43.121 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:43.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.121 --rc genhtml_branch_coverage=1 00:28:43.121 --rc genhtml_function_coverage=1 00:28:43.121 --rc genhtml_legend=1 00:28:43.121 --rc geninfo_all_blocks=1 00:28:43.121 --rc geninfo_unexecuted_blocks=1 00:28:43.121 00:28:43.121 ' 00:28:43.121 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:43.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.121 --rc genhtml_branch_coverage=1 00:28:43.121 --rc genhtml_function_coverage=1 00:28:43.121 --rc genhtml_legend=1 00:28:43.121 --rc geninfo_all_blocks=1 00:28:43.121 --rc geninfo_unexecuted_blocks=1 00:28:43.121 00:28:43.121 ' 00:28:43.121 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:43.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.121 --rc genhtml_branch_coverage=1 00:28:43.121 --rc genhtml_function_coverage=1 00:28:43.121 --rc genhtml_legend=1 00:28:43.121 --rc geninfo_all_blocks=1 00:28:43.121 --rc geninfo_unexecuted_blocks=1 00:28:43.121 00:28:43.121 ' 00:28:43.121 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:43.121 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HU8uvhVhvV 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:43.383 01:00:35 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=93456 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 93456 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 93456 ']' 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:43.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:43.383 01:00:35 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:43.383 [2024-11-17 01:00:35.281122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:43.383 [2024-11-17 01:00:35.281243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93456 ] 00:28:43.383 [2024-11-17 01:00:35.427633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.642 [2024-11-17 01:00:35.476668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.213 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:44.213 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:44.213 01:00:36 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:44.213 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:44.213 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:44.213 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:44.213 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:44.213 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:44.474 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:44.474 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:44.474 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:44.474 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:44.474 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:44.474 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:44.474 01:00:36 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:44.474 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:44.735 { 00:28:44.735 "name": "nvme0n1", 00:28:44.735 "aliases": [ 00:28:44.735 "86889a95-349c-4807-99e9-950c202888d4" 00:28:44.735 ], 00:28:44.735 "product_name": "NVMe disk", 00:28:44.735 "block_size": 4096, 00:28:44.735 "num_blocks": 1310720, 00:28:44.735 "uuid": "86889a95-349c-4807-99e9-950c202888d4", 00:28:44.735 "numa_id": -1, 00:28:44.735 "assigned_rate_limits": { 00:28:44.735 "rw_ios_per_sec": 0, 00:28:44.735 "rw_mbytes_per_sec": 0, 00:28:44.735 "r_mbytes_per_sec": 0, 00:28:44.735 "w_mbytes_per_sec": 0 00:28:44.735 }, 00:28:44.735 "claimed": true, 00:28:44.735 "claim_type": "read_many_write_one", 00:28:44.735 "zoned": false, 00:28:44.735 "supported_io_types": { 00:28:44.735 "read": true, 00:28:44.735 "write": true, 00:28:44.735 "unmap": true, 00:28:44.735 "flush": true, 00:28:44.735 "reset": true, 00:28:44.735 "nvme_admin": true, 00:28:44.735 "nvme_io": true, 00:28:44.735 "nvme_io_md": false, 00:28:44.735 "write_zeroes": true, 00:28:44.735 "zcopy": false, 00:28:44.735 "get_zone_info": false, 00:28:44.735 "zone_management": false, 00:28:44.735 "zone_append": false, 00:28:44.735 "compare": true, 00:28:44.735 "compare_and_write": false, 00:28:44.735 "abort": true, 00:28:44.735 "seek_hole": false, 00:28:44.735 "seek_data": false, 00:28:44.735 "copy": true, 00:28:44.735 "nvme_iov_md": false 00:28:44.735 }, 00:28:44.735 "driver_specific": { 00:28:44.735 "nvme": [ 00:28:44.735 { 00:28:44.735 "pci_address": "0000:00:11.0", 00:28:44.735 "trid": { 00:28:44.735 "trtype": "PCIe", 00:28:44.735 "traddr": "0000:00:11.0" 00:28:44.735 }, 00:28:44.735 "ctrlr_data": { 00:28:44.735 "cntlid": 0, 00:28:44.735 "vendor_id": "0x1b36", 00:28:44.735 "model_number": "QEMU NVMe Ctrl", 00:28:44.735 "serial_number": "12341", 00:28:44.735 "firmware_revision": "8.0.0", 00:28:44.735 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:44.735 "oacs": { 00:28:44.735 "security": 0, 00:28:44.735 "format": 1, 00:28:44.735 "firmware": 0, 00:28:44.735 "ns_manage": 1 00:28:44.735 }, 00:28:44.735 "multi_ctrlr": false, 00:28:44.735 "ana_reporting": false 00:28:44.735 }, 00:28:44.735 "vs": { 00:28:44.735 "nvme_version": "1.4" 00:28:44.735 }, 00:28:44.735 "ns_data": { 00:28:44.735 "id": 1, 00:28:44.735 "can_share": false 00:28:44.735 } 00:28:44.735 } 00:28:44.735 ], 00:28:44.735 "mp_policy": "active_passive" 00:28:44.735 } 00:28:44.735 } 00:28:44.735 ]' 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:44.735 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:44.996 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=ecae4484-7441-4719-83ff-3795ecfb10e9 00:28:44.996 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:44.996 01:00:36 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ecae4484-7441-4719-83ff-3795ecfb10e9 00:28:44.996 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:45.258 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=6c62c57d-bb29-4e68-9082-15c9fe239791 00:28:45.258 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6c62c57d-bb29-4e68-9082-15c9fe239791 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:45.519 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:45.781 { 00:28:45.781 "name": "33bf1b5c-ac14-4689-af03-7406c91c320b", 00:28:45.781 "aliases": [ 00:28:45.781 "lvs/nvme0n1p0" 00:28:45.781 ], 00:28:45.781 "product_name": "Logical Volume", 00:28:45.781 "block_size": 4096, 00:28:45.781 "num_blocks": 26476544, 00:28:45.781 "uuid": "33bf1b5c-ac14-4689-af03-7406c91c320b", 00:28:45.781 "assigned_rate_limits": { 00:28:45.781 "rw_ios_per_sec": 0, 00:28:45.781 "rw_mbytes_per_sec": 0, 00:28:45.781 "r_mbytes_per_sec": 0, 00:28:45.781 "w_mbytes_per_sec": 0 00:28:45.781 }, 00:28:45.781 "claimed": false, 00:28:45.781 "zoned": false, 00:28:45.781 "supported_io_types": { 00:28:45.781 "read": true, 00:28:45.781 "write": true, 00:28:45.781 "unmap": true, 00:28:45.781 "flush": false, 00:28:45.781 "reset": true, 00:28:45.781 "nvme_admin": false, 00:28:45.781 "nvme_io": false, 00:28:45.781 "nvme_io_md": false, 00:28:45.781 "write_zeroes": true, 00:28:45.781 "zcopy": false, 00:28:45.781 "get_zone_info": false, 00:28:45.781 "zone_management": false, 00:28:45.781 
"zone_append": false, 00:28:45.781 "compare": false, 00:28:45.781 "compare_and_write": false, 00:28:45.781 "abort": false, 00:28:45.781 "seek_hole": true, 00:28:45.781 "seek_data": true, 00:28:45.781 "copy": false, 00:28:45.781 "nvme_iov_md": false 00:28:45.781 }, 00:28:45.781 "driver_specific": { 00:28:45.781 "lvol": { 00:28:45.781 "lvol_store_uuid": "6c62c57d-bb29-4e68-9082-15c9fe239791", 00:28:45.781 "base_bdev": "nvme0n1", 00:28:45.781 "thin_provision": true, 00:28:45.781 "num_allocated_clusters": 0, 00:28:45.781 "snapshot": false, 00:28:45.781 "clone": false, 00:28:45.781 "esnap_clone": false 00:28:45.781 } 00:28:45.781 } 00:28:45.781 } 00:28:45.781 ]' 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:45.781 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:46.041 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:46.041 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:46.041 01:00:37 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:46.041 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:46.041 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:46.041 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:46.041 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:46.041 01:00:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:46.301 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:46.301 { 00:28:46.301 "name": "33bf1b5c-ac14-4689-af03-7406c91c320b", 00:28:46.301 "aliases": [ 00:28:46.301 "lvs/nvme0n1p0" 00:28:46.301 ], 00:28:46.301 "product_name": "Logical Volume", 00:28:46.301 "block_size": 4096, 00:28:46.301 "num_blocks": 26476544, 00:28:46.301 "uuid": "33bf1b5c-ac14-4689-af03-7406c91c320b", 00:28:46.301 "assigned_rate_limits": { 00:28:46.301 "rw_ios_per_sec": 0, 00:28:46.301 "rw_mbytes_per_sec": 0, 00:28:46.301 "r_mbytes_per_sec": 0, 00:28:46.301 "w_mbytes_per_sec": 0 00:28:46.301 }, 00:28:46.301 "claimed": false, 00:28:46.301 "zoned": false, 00:28:46.301 "supported_io_types": { 00:28:46.301 "read": true, 00:28:46.301 "write": true, 00:28:46.301 "unmap": true, 00:28:46.301 "flush": false, 00:28:46.301 "reset": true, 00:28:46.301 "nvme_admin": false, 00:28:46.301 "nvme_io": false, 00:28:46.301 "nvme_io_md": false, 00:28:46.301 "write_zeroes": true, 00:28:46.301 "zcopy": false, 00:28:46.301 "get_zone_info": false, 00:28:46.301 
"zone_management": false, 00:28:46.301 "zone_append": false, 00:28:46.301 "compare": false, 00:28:46.301 "compare_and_write": false, 00:28:46.301 "abort": false, 00:28:46.301 "seek_hole": true, 00:28:46.301 "seek_data": true, 00:28:46.301 "copy": false, 00:28:46.301 "nvme_iov_md": false 00:28:46.301 }, 00:28:46.301 "driver_specific": { 00:28:46.301 "lvol": { 00:28:46.301 "lvol_store_uuid": "6c62c57d-bb29-4e68-9082-15c9fe239791", 00:28:46.301 "base_bdev": "nvme0n1", 00:28:46.301 "thin_provision": true, 00:28:46.301 "num_allocated_clusters": 0, 00:28:46.301 "snapshot": false, 00:28:46.301 "clone": false, 00:28:46.301 "esnap_clone": false 00:28:46.301 } 00:28:46.301 } 00:28:46.301 } 00:28:46.301 ]' 00:28:46.301 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:46.301 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:46.301 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:46.302 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:46.302 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:46.302 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:46.302 01:00:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:46.302 01:00:38 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:46.563 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:46.563 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:46.563 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:46.563 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:46.563 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:46.563 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:46.563 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 33bf1b5c-ac14-4689-af03-7406c91c320b 00:28:46.824 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:46.824 { 00:28:46.824 "name": "33bf1b5c-ac14-4689-af03-7406c91c320b", 00:28:46.824 "aliases": [ 00:28:46.824 "lvs/nvme0n1p0" 00:28:46.824 ], 00:28:46.824 "product_name": "Logical Volume", 00:28:46.824 "block_size": 4096, 00:28:46.825 "num_blocks": 26476544, 00:28:46.825 "uuid": "33bf1b5c-ac14-4689-af03-7406c91c320b", 00:28:46.825 "assigned_rate_limits": { 00:28:46.825 "rw_ios_per_sec": 0, 00:28:46.825 "rw_mbytes_per_sec": 0, 00:28:46.825 "r_mbytes_per_sec": 0, 00:28:46.825 "w_mbytes_per_sec": 0 00:28:46.825 }, 00:28:46.825 "claimed": false, 00:28:46.825 "zoned": false, 00:28:46.825 "supported_io_types": { 00:28:46.825 "read": true, 00:28:46.825 "write": true, 00:28:46.825 "unmap": true, 00:28:46.825 "flush": false, 00:28:46.825 "reset": true, 00:28:46.825 "nvme_admin": false, 00:28:46.825 "nvme_io": false, 00:28:46.825 "nvme_io_md": false, 00:28:46.825 "write_zeroes": true, 00:28:46.825 "zcopy": false, 00:28:46.825 "get_zone_info": false, 00:28:46.825 "zone_management": false, 00:28:46.825 "zone_append": false, 00:28:46.825 "compare": false, 00:28:46.825 "compare_and_write": false, 00:28:46.825 "abort": false, 
00:28:46.825 "seek_hole": true, 00:28:46.825 "seek_data": true, 00:28:46.825 "copy": false, 00:28:46.825 "nvme_iov_md": false 00:28:46.825 }, 00:28:46.825 "driver_specific": { 00:28:46.825 "lvol": { 00:28:46.825 "lvol_store_uuid": "6c62c57d-bb29-4e68-9082-15c9fe239791", 00:28:46.825 "base_bdev": "nvme0n1", 00:28:46.825 "thin_provision": true, 00:28:46.825 "num_allocated_clusters": 0, 00:28:46.825 "snapshot": false, 00:28:46.825 "clone": false, 00:28:46.825 "esnap_clone": false 00:28:46.825 } 00:28:46.825 } 00:28:46.825 } 00:28:46.825 ]' 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 33bf1b5c-ac14-4689-af03-7406c91c320b --l2p_dram_limit 10' 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:46.825 01:00:38 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 33bf1b5c-ac14-4689-af03-7406c91c320b --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:47.087 [2024-11-17 01:00:38.892440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.892481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:47.087 [2024-11-17 01:00:38.892495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:47.087 [2024-11-17 01:00:38.892504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.892540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.892550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:47.087 [2024-11-17 01:00:38.892556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:47.087 [2024-11-17 01:00:38.892568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.892588] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:47.087 [2024-11-17 01:00:38.892765] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:47.087 [2024-11-17 01:00:38.892777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.892786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:47.087 [2024-11-17 01:00:38.892794] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:28:47.087 [2024-11-17 01:00:38.892803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.892825] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dafe0f70-ba94-4e9f-86ae-6e462132ec15 00:28:47.087 [2024-11-17 01:00:38.894107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.894131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:47.087 [2024-11-17 01:00:38.894141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:47.087 [2024-11-17 01:00:38.894148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.901153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.901178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:47.087 [2024-11-17 01:00:38.901190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.954 ms 00:28:47.087 [2024-11-17 01:00:38.901196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.901289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.901298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:47.087 [2024-11-17 01:00:38.901307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:47.087 [2024-11-17 01:00:38.901317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.901347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.901366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:47.087 [2024-11-17 01:00:38.901375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:47.087 [2024-11-17 01:00:38.901381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.901399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:47.087 [2024-11-17 01:00:38.903029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.903055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:47.087 [2024-11-17 01:00:38.903064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:28:47.087 [2024-11-17 01:00:38.903072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.903099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.903111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:47.087 [2024-11-17 01:00:38.903117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:47.087 [2024-11-17 01:00:38.903127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.903139] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:47.087 [2024-11-17 01:00:38.903260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:47.087 [2024-11-17 01:00:38.903271] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:47.087 [2024-11-17 01:00:38.903281] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:47.087 [2024-11-17 01:00:38.903289] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:47.087 [2024-11-17 01:00:38.903298] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:47.087 [2024-11-17 01:00:38.903305] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:47.087 [2024-11-17 01:00:38.903317] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:47.087 [2024-11-17 01:00:38.903322] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:47.087 [2024-11-17 01:00:38.903329] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:47.087 [2024-11-17 01:00:38.903340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.903347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:47.087 [2024-11-17 01:00:38.903366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:28:47.087 [2024-11-17 01:00:38.903375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.087 [2024-11-17 01:00:38.903439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.087 [2024-11-17 01:00:38.903456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:47.088 [2024-11-17 01:00:38.903462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:47.088 [2024-11-17 01:00:38.903470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.088 [2024-11-17 01:00:38.903542] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:47.088 [2024-11-17 01:00:38.903557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:47.088 [2024-11-17 01:00:38.903565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:47.088 [2024-11-17 01:00:38.903573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:47.088 [2024-11-17 01:00:38.903588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:47.088 [2024-11-17 01:00:38.903602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:47.088 [2024-11-17 01:00:38.903607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.088 [2024-11-17 01:00:38.903621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:47.088 [2024-11-17 01:00:38.903628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:47.088 [2024-11-17 01:00:38.903634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.088 [2024-11-17 01:00:38.903643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:47.088 [2024-11-17 01:00:38.903649] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:47.088 [2024-11-17 01:00:38.903656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:47.088 [2024-11-17 01:00:38.903668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:47.088 [2024-11-17 01:00:38.903673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:47.088 [2024-11-17 01:00:38.903687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.088 [2024-11-17 01:00:38.903701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:47.088 [2024-11-17 01:00:38.903709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.088 [2024-11-17 01:00:38.903722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:47.088 [2024-11-17 01:00:38.903727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.088 [2024-11-17 01:00:38.903740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:47.088 [2024-11-17 01:00:38.903750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.088 [2024-11-17 01:00:38.903764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:47.088 [2024-11-17 01:00:38.903769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.088 [2024-11-17 01:00:38.903784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:47.088 [2024-11-17 01:00:38.903793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:47.088 [2024-11-17 01:00:38.903799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.088 [2024-11-17 01:00:38.903807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:47.088 [2024-11-17 01:00:38.903812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:47.088 [2024-11-17 01:00:38.903820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:47.088 [2024-11-17 01:00:38.903834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:47.088 [2024-11-17 01:00:38.903839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903847] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:47.088 [2024-11-17 01:00:38.903856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:47.088 [2024-11-17 01:00:38.903865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:47.088 [2024-11-17 01:00:38.903875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.088 [2024-11-17 01:00:38.903882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:47.088 [2024-11-17 01:00:38.903889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:47.088 [2024-11-17 01:00:38.903896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:47.088 [2024-11-17 01:00:38.903902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:47.088 [2024-11-17 01:00:38.903909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:47.088 [2024-11-17 01:00:38.903916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:47.088 [2024-11-17 01:00:38.903927] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:47.088 [2024-11-17 01:00:38.903936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.088 [2024-11-17 01:00:38.903948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:47.088 [2024-11-17 01:00:38.903955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:47.088 [2024-11-17 01:00:38.903962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:47.088 [2024-11-17 01:00:38.903970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:47.088 [2024-11-17 01:00:38.903978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:47.088 [2024-11-17 01:00:38.903985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:47.088 [2024-11-17 01:00:38.903994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:47.088 [2024-11-17 01:00:38.904001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:47.088 [2024-11-17 01:00:38.904009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:47.088 [2024-11-17 01:00:38.904016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:47.088 [2024-11-17 01:00:38.904025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:47.088 [2024-11-17 01:00:38.904031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:47.088 [2024-11-17 01:00:38.904039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:47.088 [2024-11-17 01:00:38.904045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:28:47.088 [2024-11-17 01:00:38.904053] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:47.088 [2024-11-17 01:00:38.904062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.088 [2024-11-17 01:00:38.904069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:47.088 [2024-11-17 01:00:38.904075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:47.088 [2024-11-17 01:00:38.904082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:47.088 [2024-11-17 01:00:38.904088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:47.088 [2024-11-17 01:00:38.904097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.088 [2024-11-17 01:00:38.904105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:47.088 [2024-11-17 01:00:38.904119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:28:47.088 [2024-11-17 01:00:38.904125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.088 [2024-11-17 01:00:38.904165] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:47.088 [2024-11-17 01:00:38.904178] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:50.396 [2024-11-17 01:00:42.258083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.396 [2024-11-17 01:00:42.258189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:50.396 [2024-11-17 01:00:42.258218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3353.896 ms 00:28:50.396 [2024-11-17 01:00:42.258228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.396 [2024-11-17 01:00:42.277757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.396 [2024-11-17 01:00:42.277821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:50.396 [2024-11-17 01:00:42.277842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.371 ms 00:28:50.396 [2024-11-17 01:00:42.277853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.277959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.277968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:50.397 [2024-11-17 01:00:42.277986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:50.397 [2024-11-17 01:00:42.277995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.294058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.294116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:50.397 [2024-11-17 01:00:42.294133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.996 ms 00:28:50.397 [2024-11-17 01:00:42.294143] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.294189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.294203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:50.397 [2024-11-17 01:00:42.294217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:50.397 [2024-11-17 01:00:42.294226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.294974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.295013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:50.397 [2024-11-17 01:00:42.295026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:28:50.397 [2024-11-17 01:00:42.295035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.295165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.295180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:50.397 [2024-11-17 01:00:42.295196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:28:50.397 [2024-11-17 01:00:42.295211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.323903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.323967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:50.397 [2024-11-17 01:00:42.323987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.661 ms 00:28:50.397 [2024-11-17 01:00:42.323999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.335823] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:50.397 [2024-11-17 01:00:42.340884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.340958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:50.397 [2024-11-17 01:00:42.340972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.760 ms 00:28:50.397 [2024-11-17 01:00:42.340985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.426103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.426166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:50.397 [2024-11-17 01:00:42.426181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.076 ms 00:28:50.397 [2024-11-17 01:00:42.426197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.426458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.426477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:50.397 [2024-11-17 01:00:42.426488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:28:50.397 [2024-11-17 01:00:42.426500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.432488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.432546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:50.397 [2024-11-17 01:00:42.432560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.940 ms 00:28:50.397 [2024-11-17 01:00:42.432572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.437742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.437796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:50.397 [2024-11-17 01:00:42.437808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.115 ms 00:28:50.397 [2024-11-17 01:00:42.437820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-11-17 01:00:42.438173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-11-17 01:00:42.438199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:50.397 [2024-11-17 01:00:42.438216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:28:50.397 [2024-11-17 01:00:42.438230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.659 [2024-11-17 01:00:42.484672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.659 [2024-11-17 01:00:42.484740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:50.659 [2024-11-17 01:00:42.484754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.416 ms 00:28:50.659 [2024-11-17 01:00:42.484766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.659 [2024-11-17 01:00:42.492992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.659 [2024-11-17 01:00:42.493048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:50.659 [2024-11-17 01:00:42.493061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.129 ms 00:28:50.659 [2024-11-17 01:00:42.493074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.659 [2024-11-17 01:00:42.499196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.659 [2024-11-17 01:00:42.499250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:50.659 [2024-11-17 01:00:42.499261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.070 ms 00:28:50.659 [2024-11-17 01:00:42.499272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.659 [2024-11-17 01:00:42.505715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.659 [2024-11-17 01:00:42.505771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:50.659 [2024-11-17 01:00:42.505783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.393 ms 00:28:50.659 [2024-11-17 01:00:42.505797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.659 [2024-11-17 01:00:42.505852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.659 [2024-11-17 01:00:42.505867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:50.659 [2024-11-17 01:00:42.505877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:50.659 [2024-11-17 01:00:42.505895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.659 [2024-11-17 01:00:42.506007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.659 [2024-11-17 01:00:42.506025] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:50.659 [2024-11-17 01:00:42.506034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:50.659 [2024-11-17 01:00:42.506053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.659 [2024-11-17 01:00:42.507642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3614.575 ms, result 0 00:28:50.659 { 00:28:50.659 "name": "ftl0", 00:28:50.659 "uuid": "dafe0f70-ba94-4e9f-86ae-6e462132ec15" 00:28:50.659 } 00:28:50.659 01:00:42 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:50.659 01:00:42 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:50.920 01:00:42 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:50.921 01:00:42 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:50.921 [2024-11-17 01:00:42.958601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.921 [2024-11-17 01:00:42.958657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:50.921 [2024-11-17 01:00:42.958674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:50.921 [2024-11-17 01:00:42.958683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.921 [2024-11-17 01:00:42.958717] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:50.921 [2024-11-17 01:00:42.959728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.921 [2024-11-17 01:00:42.959778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:50.921 [2024-11-17 01:00:42.959791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:28:50.921 [2024-11-17 01:00:42.959804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.921 [2024-11-17 01:00:42.960069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.921 [2024-11-17 01:00:42.960086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:50.921 [2024-11-17 01:00:42.960095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:28:50.921 [2024-11-17 01:00:42.960107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.921 [2024-11-17 01:00:42.963383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.921 [2024-11-17 01:00:42.963418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:50.921 [2024-11-17 01:00:42.963428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:28:50.921 [2024-11-17 01:00:42.963439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.921 [2024-11-17 01:00:42.969684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.921 [2024-11-17 01:00:42.969730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:50.921 [2024-11-17 01:00:42.969742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:28:50.921 [2024-11-17 01:00:42.969753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.921 [2024-11-17 01:00:42.972969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:50.921 [2024-11-17 01:00:42.973033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:50.921 [2024-11-17 01:00:42.973043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.123 ms 00:28:50.921 [2024-11-17 01:00:42.973055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.921 [2024-11-17 01:00:42.981001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.921 [2024-11-17 01:00:42.981061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:50.921 [2024-11-17 01:00:42.981075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.897 ms 00:28:50.921 [2024-11-17 01:00:42.981088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.921 [2024-11-17 01:00:42.981227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.921 [2024-11-17 01:00:42.981250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:50.921 [2024-11-17 01:00:42.981262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:28:50.921 [2024-11-17 01:00:42.981274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.183 [2024-11-17 01:00:42.984739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.183 [2024-11-17 01:00:42.984793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:51.183 [2024-11-17 01:00:42.984803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.439 ms 00:28:51.183 [2024-11-17 01:00:42.984814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.183 [2024-11-17 01:00:42.987875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.183 [2024-11-17 01:00:42.987931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:51.183 [2024-11-17 01:00:42.987941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms 00:28:51.183 [2024-11-17 01:00:42.987952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.183 [2024-11-17 01:00:42.990501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.183 [2024-11-17 01:00:42.990555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:51.183 [2024-11-17 01:00:42.990566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:28:51.183 [2024-11-17 01:00:42.990579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.183 [2024-11-17 01:00:42.992739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.183 [2024-11-17 01:00:42.992794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:51.183 [2024-11-17 01:00:42.992804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:28:51.183 [2024-11-17 01:00:42.992815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.183 [2024-11-17 01:00:42.992859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:51.183 [2024-11-17 01:00:42.992878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:51.183 [2024-11-17 01:00:42.992889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.992900] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.992926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.992940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.992948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.992960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.992968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.992978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.992987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.992998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993155] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 
01:00:42.993430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:51.184 [2024-11-17 01:00:42.993692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:51.184 [2024-11-17 01:00:42.993834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:51.185 [2024-11-17 01:00:42.993947] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:51.185 [2024-11-17 01:00:42.993956] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dafe0f70-ba94-4e9f-86ae-6e462132ec15 00:28:51.185 
[2024-11-17 01:00:42.993968] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:51.185 [2024-11-17 01:00:42.993977] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:51.185 [2024-11-17 01:00:42.993988] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:51.185 [2024-11-17 01:00:42.993997] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:51.185 [2024-11-17 01:00:42.994007] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:51.185 [2024-11-17 01:00:42.994016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:51.185 [2024-11-17 01:00:42.994026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:51.185 [2024-11-17 01:00:42.994033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:51.185 [2024-11-17 01:00:42.994043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:51.185 [2024-11-17 01:00:42.994052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.185 [2024-11-17 01:00:42.994066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:51.185 [2024-11-17 01:00:42.994075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:28:51.185 [2024-11-17 01:00:42.994087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:42.996624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.185 [2024-11-17 01:00:42.996670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:51.185 [2024-11-17 01:00:42.996683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.507 ms 00:28:51.185 [2024-11-17 01:00:42.996696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:42.996821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.185 [2024-11-17 01:00:42.996835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:51.185 [2024-11-17 01:00:42.996845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:51.185 [2024-11-17 01:00:42.996855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.007618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.007676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:51.185 [2024-11-17 01:00:43.007689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.007701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.007769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.007781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:51.185 [2024-11-17 01:00:43.007790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.007800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.007890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.007909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:51.185 [2024-11-17 01:00:43.007919] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.007929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.007949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.007967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:51.185 [2024-11-17 01:00:43.007976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.007987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.027974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.028051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:51.185 [2024-11-17 01:00:43.028068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.028081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.044372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.044440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:51.185 [2024-11-17 01:00:43.044459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.044476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.044576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.044595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:51.185 [2024-11-17 01:00:43.044604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.044617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.044668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.044684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:51.185 [2024-11-17 01:00:43.044695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.044707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.044797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.044813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:51.185 [2024-11-17 01:00:43.044822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.044834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.044879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.044892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:51.185 [2024-11-17 01:00:43.044901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.044933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.044986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.045052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:51.185 [2024-11-17 01:00:43.045063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.045076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.045140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.185 [2024-11-17 01:00:43.045165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:51.185 [2024-11-17 01:00:43.045179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.185 [2024-11-17 01:00:43.045191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.185 [2024-11-17 01:00:43.045443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.747 ms, result 0 00:28:51.185 true 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 93456 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93456 ']' 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93456 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93456 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:51.185 killing process with pid 93456 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93456' 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 93456 00:28:51.185 01:00:43 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 93456 00:28:56.479 01:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:00.690 262144+0 records in 00:29:00.690 262144+0 records out 00:29:00.690 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.41166 s, 243 MB/s 00:29:00.690 01:00:52 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:02.659 01:00:54 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:02.659 [2024-11-17 01:00:54.628853] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:02.659 [2024-11-17 01:00:54.628995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93670 ] 00:29:02.920 [2024-11-17 01:00:54.780548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.920 [2024-11-17 01:00:54.817835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.920 [2024-11-17 01:00:54.924470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.920 [2024-11-17 01:00:54.924552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:03.184 [2024-11-17 01:00:55.087417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.087489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:03.184 [2024-11-17 01:00:55.087507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:03.184 [2024-11-17 01:00:55.087517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.087576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.087587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:03.184 [2024-11-17 01:00:55.087601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:03.184 [2024-11-17 01:00:55.087610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.087630] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:03.184 [2024-11-17 01:00:55.087910] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:03.184 [2024-11-17 01:00:55.087927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.087935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:03.184 [2024-11-17 01:00:55.087947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:29:03.184 [2024-11-17 01:00:55.087958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.090405] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:03.184 [2024-11-17 01:00:55.094508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.094553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:03.184 [2024-11-17 01:00:55.094568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.106 ms 00:29:03.184 [2024-11-17 01:00:55.094577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.094666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.094683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:03.184 [2024-11-17 01:00:55.094697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:03.184 [2024-11-17 01:00:55.094705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.103948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:03.184 [2024-11-17 01:00:55.103998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:03.184 [2024-11-17 01:00:55.104011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.197 ms 00:29:03.184 [2024-11-17 01:00:55.104028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.104143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.104159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:03.184 [2024-11-17 01:00:55.104172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:03.184 [2024-11-17 01:00:55.104180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.104246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.104260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:03.184 [2024-11-17 01:00:55.104269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:03.184 [2024-11-17 01:00:55.104276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.104304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:03.184 [2024-11-17 01:00:55.106301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.106342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:03.184 [2024-11-17 01:00:55.106370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.007 ms 00:29:03.184 [2024-11-17 01:00:55.106378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.106414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.106423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:03.184 [2024-11-17 01:00:55.106432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:03.184 [2024-11-17 01:00:55.106441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.106465] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:03.184 [2024-11-17 01:00:55.106496] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:03.184 [2024-11-17 01:00:55.106533] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:03.184 [2024-11-17 01:00:55.106550] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:03.184 [2024-11-17 01:00:55.106665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:03.184 [2024-11-17 01:00:55.106683] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:03.184 [2024-11-17 01:00:55.106696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:03.184 [2024-11-17 01:00:55.106708] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:03.184 [2024-11-17 01:00:55.106725] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:03.184 [2024-11-17 01:00:55.106738] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:03.184 [2024-11-17 01:00:55.106747] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:03.184 [2024-11-17 01:00:55.106760] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:03.184 [2024-11-17 01:00:55.106768] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:03.184 [2024-11-17 01:00:55.106781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.106789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:03.184 [2024-11-17 01:00:55.106798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:29:03.184 [2024-11-17 01:00:55.106806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.106892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.184 [2024-11-17 01:00:55.106909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:03.184 [2024-11-17 01:00:55.106918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:03.184 [2024-11-17 01:00:55.106929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.184 [2024-11-17 01:00:55.107032] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:03.184 [2024-11-17 01:00:55.107043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:03.184 [2024-11-17 01:00:55.107053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:03.184 [2024-11-17 01:00:55.107069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.184 [2024-11-17 01:00:55.107080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:03.184 [2024-11-17 01:00:55.107088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:03.184 [2024-11-17 01:00:55.107096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:03.184 [2024-11-17 01:00:55.107105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:03.184 [2024-11-17 01:00:55.107113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:03.184 [2024-11-17 01:00:55.107121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.184 [2024-11-17 01:00:55.107129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:03.184 [2024-11-17 01:00:55.107137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:03.184 [2024-11-17 01:00:55.107150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.184 [2024-11-17 01:00:55.107158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:03.184 [2024-11-17 01:00:55.107167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:03.184 [2024-11-17 01:00:55.107176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.184 [2024-11-17 01:00:55.107184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:03.184 [2024-11-17 01:00:55.107192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:03.184 [2024-11-17 01:00:55.107200] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.185 [2024-11-17 01:00:55.107208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:03.185 [2024-11-17 01:00:55.107217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:03.185 [2024-11-17 01:00:55.107225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.185 [2024-11-17 01:00:55.107232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:03.185 [2024-11-17 01:00:55.107240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:03.185 [2024-11-17 01:00:55.107248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.185 [2024-11-17 01:00:55.107257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:03.185 [2024-11-17 01:00:55.107265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:03.185 [2024-11-17 01:00:55.107272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.185 [2024-11-17 01:00:55.107285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:03.185 [2024-11-17 01:00:55.107294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:03.185 [2024-11-17 01:00:55.107302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.185 [2024-11-17 01:00:55.107310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:03.185 [2024-11-17 01:00:55.107318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:03.185 [2024-11-17 01:00:55.107326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.185 [2024-11-17 01:00:55.107334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:03.185 [2024-11-17 01:00:55.107343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:03.185 [2024-11-17 01:00:55.107383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.185 [2024-11-17 01:00:55.107396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:03.185 [2024-11-17 01:00:55.107405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:03.185 [2024-11-17 01:00:55.107413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.185 [2024-11-17 01:00:55.107421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:03.185 [2024-11-17 01:00:55.107430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:03.185 [2024-11-17 01:00:55.107439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.185 [2024-11-17 01:00:55.107447] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:03.185 [2024-11-17 01:00:55.107459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:03.185 [2024-11-17 01:00:55.107469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:03.185 [2024-11-17 01:00:55.107480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.185 [2024-11-17 01:00:55.107490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:03.185 [2024-11-17 01:00:55.107498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:03.185 [2024-11-17 01:00:55.107508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:03.185 
[2024-11-17 01:00:55.107515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:03.185 [2024-11-17 01:00:55.107522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:03.185 [2024-11-17 01:00:55.107529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:03.185 [2024-11-17 01:00:55.107539] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:03.185 [2024-11-17 01:00:55.107548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.185 [2024-11-17 01:00:55.107558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:03.185 [2024-11-17 01:00:55.107565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:03.185 [2024-11-17 01:00:55.107574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:03.185 [2024-11-17 01:00:55.107581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:03.185 [2024-11-17 01:00:55.107589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:03.185 [2024-11-17 01:00:55.107599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:03.185 [2024-11-17 01:00:55.107607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:03.185 [2024-11-17 01:00:55.107614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:03.185 [2024-11-17 01:00:55.107622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:03.185 [2024-11-17 01:00:55.107630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:03.185 [2024-11-17 01:00:55.107637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:03.185 [2024-11-17 01:00:55.107644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:03.185 [2024-11-17 01:00:55.107652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:03.185 [2024-11-17 01:00:55.107660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:03.185 [2024-11-17 01:00:55.107668] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:03.185 [2024-11-17 01:00:55.107677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.185 [2024-11-17 01:00:55.107686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:03.185 [2024-11-17 01:00:55.107693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:03.185 [2024-11-17 01:00:55.107701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:03.185 [2024-11-17 01:00:55.107710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:03.185 [2024-11-17 01:00:55.107717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.107728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:03.185 [2024-11-17 01:00:55.107736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:29:03.185 [2024-11-17 01:00:55.107743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.135939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.136027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.185 [2024-11-17 01:00:55.136067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.146 ms 00:29:03.185 [2024-11-17 01:00:55.136085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.136277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.136296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:03.185 [2024-11-17 01:00:55.136314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:29:03.185 [2024-11-17 01:00:55.136329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.145848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.145880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.185 [2024-11-17 01:00:55.145889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.381 ms 00:29:03.185 [2024-11-17 01:00:55.145897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.145932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.145940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.185 [2024-11-17 01:00:55.145948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:03.185 [2024-11-17 01:00:55.145959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.146310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.146338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.185 [2024-11-17 01:00:55.146347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:29:03.185 [2024-11-17 01:00:55.146367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.146489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.146534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.185 [2024-11-17 01:00:55.146547] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:29:03.185 [2024-11-17 01:00:55.146555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.151349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.151382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.185 [2024-11-17 01:00:55.151396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.758 ms 00:29:03.185 [2024-11-17 01:00:55.151407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.154253] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:03.185 [2024-11-17 01:00:55.154287] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:03.185 [2024-11-17 01:00:55.154302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.154310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:03.185 [2024-11-17 01:00:55.154318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:29:03.185 [2024-11-17 01:00:55.154326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.168868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.185 [2024-11-17 01:00:55.169024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:03.185 [2024-11-17 01:00:55.169048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.487 ms 00:29:03.185 [2024-11-17 01:00:55.169059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.185 [2024-11-17 01:00:55.171237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.171271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:03.186 [2024-11-17 01:00:55.171281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:29:03.186 [2024-11-17 01:00:55.171288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.173031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.173063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:03.186 [2024-11-17 01:00:55.173072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:29:03.186 [2024-11-17 01:00:55.173079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.173457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.173480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:03.186 [2024-11-17 01:00:55.173489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:29:03.186 [2024-11-17 01:00:55.173496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.190142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.190254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:03.186 [2024-11-17 01:00:55.190270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.629 ms 00:29:03.186 [2024-11-17 01:00:55.190278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.197804] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:03.186 [2024-11-17 01:00:55.200267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.200391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:03.186 [2024-11-17 01:00:55.200408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.947 ms 00:29:03.186 [2024-11-17 01:00:55.200428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.200481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.200493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:03.186 [2024-11-17 01:00:55.200505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:03.186 [2024-11-17 01:00:55.200512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.200580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.200589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:03.186 [2024-11-17 01:00:55.200598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:03.186 [2024-11-17 01:00:55.200609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.200630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.200638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:03.186 [2024-11-17 01:00:55.200646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:03.186 [2024-11-17 01:00:55.200654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.200685] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:03.186 [2024-11-17 01:00:55.200695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.200702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:03.186 [2024-11-17 01:00:55.200712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:03.186 [2024-11-17 01:00:55.200719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.204982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.205017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:03.186 [2024-11-17 01:00:55.205032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.245 ms 00:29:03.186 [2024-11-17 01:00:55.205039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 [2024-11-17 01:00:55.205107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.186 [2024-11-17 01:00:55.205116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:03.186 [2024-11-17 01:00:55.205125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:03.186 [2024-11-17 01:00:55.205132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.186 
[2024-11-17 01:00:55.206026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.226 ms, result 0
[2024-11-17T01:02:01.201Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 01:02:00.831016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.138 [2024-11-17 01:02:00.831077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:09.138 [2024-11-17 01:02:00.831093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:09.138 [2024-11-17 01:02:00.831102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.139 [2024-11-17 01:02:00.831123] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:09.139 [2024-11-17 01:02:00.831926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.139 [2024-11-17 01:02:00.831953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:09.139 [2024-11-17 01:02:00.831966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:30:09.139 [2024-11-17 01:02:00.831976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.139 [2024-11-17 01:02:00.834021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.139 [2024-11-17 01:02:00.834068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:09.139 [2024-11-17 01:02:00.834078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.012 ms 00:30:09.139 [2024-11-17 01:02:00.834087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.139 [2024-11-17 01:02:00.834113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.139 [2024-11-17 01:02:00.834128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:09.139 [2024-11-17 01:02:00.834137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:09.139 [2024-11-17 01:02:00.834145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.139 [2024-11-17 01:02:00.834200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.139 [2024-11-17 01:02:00.834209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:09.139 [2024-11-17 01:02:00.834217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:09.139 [2024-11-17 01:02:00.834225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.139 [2024-11-17 01:02:00.834238] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:09.139 [2024-11-17 01:02:00.834250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 
00:30:09.139 [2024-11-17 01:02:00.834276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 
0 state: free 00:30:09.139 [2024-11-17 01:02:00.834493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
53: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:09.139 [2024-11-17 01:02:00.834850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834886] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.834999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.835007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.835014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.835021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.835028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.835035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.835042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.835050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:09.140 [2024-11-17 01:02:00.835065] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:09.140 [2024-11-17 01:02:00.835076] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dafe0f70-ba94-4e9f-86ae-6e462132ec15 00:30:09.140 [2024-11-17 01:02:00.835085] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total valid LBAs: 0 00:30:09.140 [2024-11-17 01:02:00.835092] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:09.140 [2024-11-17 01:02:00.835099] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:09.140 [2024-11-17 01:02:00.835107] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:09.140 [2024-11-17 01:02:00.835115] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:09.140 [2024-11-17 01:02:00.835122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:09.140 [2024-11-17 01:02:00.835129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:09.140 [2024-11-17 01:02:00.835136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:09.140 [2024-11-17 01:02:00.835142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:09.140 [2024-11-17 01:02:00.835149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.140 [2024-11-17 01:02:00.835156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:09.140 [2024-11-17 01:02:00.835164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:30:09.140 [2024-11-17 01:02:00.835171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.837595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.140 [2024-11-17 01:02:00.837627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:09.140 [2024-11-17 01:02:00.837644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.402 ms 00:30:09.140 [2024-11-17 01:02:00.837652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.837774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.140 [2024-11-17 01:02:00.837783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:09.140 [2024-11-17 01:02:00.837792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:30:09.140 [2024-11-17 01:02:00.837803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.844866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.845040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:09.140 [2024-11-17 01:02:00.845107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.845131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.845209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.845230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:09.140 [2024-11-17 01:02:00.845251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.845276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.845325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.845419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:09.140 [2024-11-17 01:02:00.845445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 
01:02:00.845465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.845494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.845515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:09.140 [2024-11-17 01:02:00.845575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.845599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.859453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.859634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:09.140 [2024-11-17 01:02:00.859692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.859716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.870525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.870718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:09.140 [2024-11-17 01:02:00.870737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.870753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.870804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.870814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:09.140 [2024-11-17 01:02:00.870828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.870837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.870871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.870885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:09.140 [2024-11-17 01:02:00.870895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.870903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.870964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.870975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:09.140 [2024-11-17 01:02:00.870983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.870991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.871021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.871031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:09.140 [2024-11-17 01:02:00.871039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.871048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.871087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.871099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:09.140 [2024-11-17 01:02:00.871108] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.871120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.871167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.140 [2024-11-17 01:02:00.871178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:09.140 [2024-11-17 01:02:00.871186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.140 [2024-11-17 01:02:00.871194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.140 [2024-11-17 01:02:00.871338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 40.279 ms, result 0 00:30:09.140 00:30:09.140 00:30:09.140 01:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:09.402 [2024-11-17 01:02:01.224517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:09.402 [2024-11-17 01:02:01.224670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94338 ] 00:30:09.402 [2024-11-17 01:02:01.377929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.402 [2024-11-17 01:02:01.432136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.664 [2024-11-17 01:02:01.546575] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:09.664 [2024-11-17 01:02:01.546653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:09.664 [2024-11-17 01:02:01.707824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.707881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:09.664 [2024-11-17 01:02:01.707899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:09.664 [2024-11-17 01:02:01.707908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.707962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.707973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:09.664 [2024-11-17 01:02:01.707986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:09.664 [2024-11-17 01:02:01.707994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.708018] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:09.664 [2024-11-17 01:02:01.708293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:09.664 [2024-11-17 01:02:01.708310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.708318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:09.664 [2024-11-17 01:02:01.708331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:30:09.664 [2024-11-17 01:02:01.708341] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.708649] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:09.664 [2024-11-17 01:02:01.708676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.708685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:09.664 [2024-11-17 01:02:01.708695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:09.664 [2024-11-17 01:02:01.708703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.708766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.708779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:09.664 [2024-11-17 01:02:01.708787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:09.664 [2024-11-17 01:02:01.708795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.709281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.709309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:09.664 [2024-11-17 01:02:01.709318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:30:09.664 [2024-11-17 01:02:01.709325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.709436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.709449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:09.664 [2024-11-17 01:02:01.709457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:30:09.664 [2024-11-17 01:02:01.709465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.709490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.709499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:09.664 [2024-11-17 01:02:01.709508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:09.664 [2024-11-17 01:02:01.709515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.709535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:09.664 [2024-11-17 01:02:01.711599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.711772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:09.664 [2024-11-17 01:02:01.711797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:30:09.664 [2024-11-17 01:02:01.711806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.711842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.664 [2024-11-17 01:02:01.711852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:09.664 [2024-11-17 01:02:01.711861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:09.664 [2024-11-17 01:02:01.711869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.664 [2024-11-17 01:02:01.711921] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:09.665 [2024-11-17 01:02:01.711945] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:09.665 [2024-11-17 01:02:01.711985] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:09.665 [2024-11-17 01:02:01.712001] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:09.665 [2024-11-17 01:02:01.712106] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:09.665 [2024-11-17 01:02:01.712118] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:09.665 [2024-11-17 01:02:01.712135] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:09.665 [2024-11-17 01:02:01.712150] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712159] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712173] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:09.665 [2024-11-17 01:02:01.712181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:09.665 [2024-11-17 01:02:01.712188] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:09.665 [2024-11-17 01:02:01.712199] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:09.665 [2024-11-17 01:02:01.712211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.665 [2024-11-17 01:02:01.712219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:09.665 [2024-11-17 01:02:01.712226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:30:09.665 [2024-11-17 01:02:01.712237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.665 [2024-11-17 01:02:01.712323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.665 [2024-11-17 01:02:01.712332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:09.665 [2024-11-17 01:02:01.712344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:09.665 [2024-11-17 01:02:01.712370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.665 [2024-11-17 01:02:01.712477] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:09.665 [2024-11-17 01:02:01.712493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:09.665 [2024-11-17 01:02:01.712501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:09.665 [2024-11-17 01:02:01.712532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:30:09.665 [2024-11-17 01:02:01.712554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:09.665 [2024-11-17 01:02:01.712568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:09.665 [2024-11-17 01:02:01.712575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:09.665 [2024-11-17 01:02:01.712581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:09.665 [2024-11-17 01:02:01.712588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:09.665 [2024-11-17 01:02:01.712595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:09.665 [2024-11-17 01:02:01.712601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:09.665 [2024-11-17 01:02:01.712614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:09.665 [2024-11-17 01:02:01.712641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:09.665 [2024-11-17 01:02:01.712664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:09.665 [2024-11-17 01:02:01.712684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:09.665 [2024-11-17 01:02:01.712706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:09.665 [2024-11-17 01:02:01.712727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:09.665 [2024-11-17 01:02:01.712741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:09.665 [2024-11-17 01:02:01.712752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:09.665 [2024-11-17 01:02:01.712759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:09.665 [2024-11-17 01:02:01.712766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:09.665 [2024-11-17 01:02:01.712773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:09.665 [2024-11-17 01:02:01.712780] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:09.665 [2024-11-17 01:02:01.712793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:09.665 [2024-11-17 01:02:01.712800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712807] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:09.665 [2024-11-17 01:02:01.712815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:09.665 [2024-11-17 01:02:01.712826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:09.665 [2024-11-17 01:02:01.712843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:09.665 [2024-11-17 01:02:01.712849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:09.665 [2024-11-17 01:02:01.712856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:09.665 [2024-11-17 01:02:01.712863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:09.665 [2024-11-17 01:02:01.712872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:09.665 [2024-11-17 01:02:01.712878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:09.665 [2024-11-17 01:02:01.712888] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:09.665 [2024-11-17 01:02:01.712898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:09.665 [2024-11-17 01:02:01.712907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:09.665 [2024-11-17 01:02:01.712916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:09.665 [2024-11-17 01:02:01.712923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:09.665 [2024-11-17 01:02:01.712945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:09.665 [2024-11-17 01:02:01.712952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:09.665 [2024-11-17 01:02:01.712960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:09.665 [2024-11-17 01:02:01.712967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:09.665 [2024-11-17 01:02:01.712974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:09.665 [2024-11-17 01:02:01.712981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:09.665 [2024-11-17 01:02:01.712989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:30:09.665 [2024-11-17 01:02:01.712996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:09.665 [2024-11-17 01:02:01.713004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:09.665 [2024-11-17 01:02:01.713014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:09.665 [2024-11-17 01:02:01.713021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:09.665 [2024-11-17 01:02:01.713029] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:09.665 [2024-11-17 01:02:01.713040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:09.665 [2024-11-17 01:02:01.713049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:09.665 [2024-11-17 01:02:01.713058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:09.665 [2024-11-17 01:02:01.713067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:09.665 [2024-11-17 01:02:01.713076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:09.665 [2024-11-17 01:02:01.713083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.666 [2024-11-17 01:02:01.713092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:09.666 [2024-11-17 01:02:01.713100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:30:09.666 [2024-11-17 01:02:01.713108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.731767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.731995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:09.928 [2024-11-17 01:02:01.732094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.615 ms 00:30:09.928 [2024-11-17 01:02:01.732131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.732283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.732568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:09.928 [2024-11-17 01:02:01.732625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:09.928 [2024-11-17 01:02:01.732654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.744798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.744964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:09.928 [2024-11-17 01:02:01.745040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.910 ms 00:30:09.928 [2024-11-17 01:02:01.745064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.745112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.745133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:09.928 [2024-11-17 01:02:01.745153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:09.928 [2024-11-17 01:02:01.745172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.745288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.745317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:09.928 [2024-11-17 01:02:01.745344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:09.928 [2024-11-17 01:02:01.745447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.745601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.745624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:09.928 [2024-11-17 01:02:01.745644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:30:09.928 [2024-11-17 01:02:01.745666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.752396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.752532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:09.928 [2024-11-17 01:02:01.752608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.693 ms 00:30:09.928 [2024-11-17 01:02:01.752635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.752769] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:09.928 [2024-11-17 01:02:01.752854] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:09.928 [2024-11-17 01:02:01.752952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.752985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:09.928 [2024-11-17 01:02:01.753006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:30:09.928 [2024-11-17 01:02:01.753026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.765514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.765648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:09.928 [2024-11-17 01:02:01.765713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.450 ms 00:30:09.928 [2024-11-17 01:02:01.765736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.765884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.765909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:09.928 [2024-11-17 01:02:01.765959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:30:09.928 [2024-11-17 01:02:01.765987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.766057] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.766081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:09.928 [2024-11-17 01:02:01.766106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:09.928 [2024-11-17 01:02:01.766125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.766485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.766694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:09.928 [2024-11-17 01:02:01.766765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:30:09.928 [2024-11-17 01:02:01.766787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.766851] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:09.928 [2024-11-17 01:02:01.766887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.766906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:09.928 [2024-11-17 01:02:01.766960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:09.928 [2024-11-17 01:02:01.766993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.776280] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:09.928 [2024-11-17 01:02:01.776561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.776743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:09.928 [2024-11-17 01:02:01.776830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.534 ms 00:30:09.928 [2024-11-17 01:02:01.776853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.779304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.779458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:09.928 [2024-11-17 01:02:01.779526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.402 ms 00:30:09.928 [2024-11-17 01:02:01.779548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.779704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.779737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:09.928 [2024-11-17 01:02:01.779796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:09.928 [2024-11-17 01:02:01.779943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.780018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.780103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:09.928 [2024-11-17 01:02:01.780128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:09.928 [2024-11-17 01:02:01.780176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.780234] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:09.928 
[2024-11-17 01:02:01.780264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.780287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:09.928 [2024-11-17 01:02:01.780308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:09.928 [2024-11-17 01:02:01.780382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.928 [2024-11-17 01:02:01.786235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.928 [2024-11-17 01:02:01.786408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:09.929 [2024-11-17 01:02:01.786793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.811 ms 00:30:09.929 [2024-11-17 01:02:01.786819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.929 [2024-11-17 01:02:01.786919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.929 [2024-11-17 01:02:01.786932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:09.929 [2024-11-17 01:02:01.786942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:09.929 [2024-11-17 01:02:01.786950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.929 [2024-11-17 01:02:01.788311] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 80.044 ms, result 0 00:30:11.315  [2024-11-17T01:02:04.325Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-17T01:02:05.270Z] Copying: 35/1024 [MB] (19 MBps) [2024-11-17T01:02:06.214Z] Copying: 50/1024 [MB] (15 MBps) [2024-11-17T01:02:07.158Z] Copying: 67/1024 [MB] (16 MBps) [2024-11-17T01:02:08.102Z] Copying: 83/1024 [MB] (15 MBps) [2024-11-17T01:02:09.046Z] Copying: 96/1024 [MB] (13 MBps) [2024-11-17T01:02:09.999Z] Copying: 110/1024 [MB] (14 MBps) [2024-11-17T01:02:11.386Z] Copying: 124/1024 [MB] (14 MBps) [2024-11-17T01:02:12.327Z] Copying: 143/1024 [MB] (18 MBps) [2024-11-17T01:02:13.272Z] Copying: 156/1024 [MB] (13 MBps) [2024-11-17T01:02:14.213Z] Copying: 169/1024 [MB] (13 MBps) [2024-11-17T01:02:15.157Z] Copying: 192/1024 [MB] (22 MBps) [2024-11-17T01:02:16.099Z] Copying: 203/1024 [MB] (10 MBps) [2024-11-17T01:02:17.040Z] Copying: 213/1024 [MB] (10 MBps) [2024-11-17T01:02:17.985Z] Copying: 224/1024 [MB] (10 MBps) [2024-11-17T01:02:19.421Z] Copying: 234/1024 [MB] (10 MBps) [2024-11-17T01:02:20.016Z] Copying: 244/1024 [MB] (10 MBps) [2024-11-17T01:02:21.405Z] Copying: 255/1024 [MB] (10 MBps) [2024-11-17T01:02:21.992Z] Copying: 266/1024 [MB] (10 MBps) [2024-11-17T01:02:23.380Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-17T01:02:24.325Z] Copying: 287/1024 [MB] (10 MBps) [2024-11-17T01:02:25.270Z] Copying: 298/1024 [MB] (10 MBps) [2024-11-17T01:02:26.217Z] Copying: 309/1024 [MB] (10 MBps) [2024-11-17T01:02:27.163Z] Copying: 320/1024 [MB] (11 MBps) [2024-11-17T01:02:28.109Z] Copying: 331/1024 [MB] (10 MBps) [2024-11-17T01:02:29.052Z] Copying: 342/1024 [MB] (10 MBps) [2024-11-17T01:02:29.995Z] Copying: 352/1024 [MB] (10 MBps) [2024-11-17T01:02:31.386Z] Copying: 365/1024 [MB] (13 MBps) [2024-11-17T01:02:32.333Z] Copying: 378/1024 [MB] (12 MBps) [2024-11-17T01:02:33.277Z] Copying: 394/1024 [MB] (16 MBps) [2024-11-17T01:02:34.223Z] Copying: 405/1024 [MB] (11 MBps) [2024-11-17T01:02:35.168Z] Copying: 418/1024 [MB] (12 MBps) [2024-11-17T01:02:36.110Z] Copying: 433/1024 [MB] (15 MBps) [2024-11-17T01:02:37.056Z] Copying: 450/1024 
[MB] (16 MBps) [2024-11-17T01:02:38.002Z] Copying: 461/1024 [MB] (10 MBps) [2024-11-17T01:02:39.391Z] Copying: 479/1024 [MB] (18 MBps) [2024-11-17T01:02:40.336Z] Copying: 492/1024 [MB] (12 MBps) [2024-11-17T01:02:41.278Z] Copying: 510/1024 [MB] (17 MBps) [2024-11-17T01:02:42.221Z] Copying: 528/1024 [MB] (18 MBps) [2024-11-17T01:02:43.165Z] Copying: 554/1024 [MB] (26 MBps) [2024-11-17T01:02:44.110Z] Copying: 575/1024 [MB] (20 MBps) [2024-11-17T01:02:45.055Z] Copying: 589/1024 [MB] (14 MBps) [2024-11-17T01:02:46.000Z] Copying: 600/1024 [MB] (10 MBps) [2024-11-17T01:02:47.390Z] Copying: 611/1024 [MB] (11 MBps) [2024-11-17T01:02:48.339Z] Copying: 627/1024 [MB] (15 MBps) [2024-11-17T01:02:48.982Z] Copying: 644/1024 [MB] (17 MBps) [2024-11-17T01:02:50.371Z] Copying: 655/1024 [MB] (10 MBps) [2024-11-17T01:02:51.316Z] Copying: 666/1024 [MB] (10 MBps) [2024-11-17T01:02:52.261Z] Copying: 677/1024 [MB] (11 MBps) [2024-11-17T01:02:53.204Z] Copying: 688/1024 [MB] (11 MBps) [2024-11-17T01:02:54.148Z] Copying: 699/1024 [MB] (10 MBps) [2024-11-17T01:02:55.092Z] Copying: 709/1024 [MB] (10 MBps) [2024-11-17T01:02:56.034Z] Copying: 720/1024 [MB] (10 MBps) [2024-11-17T01:02:56.978Z] Copying: 731/1024 [MB] (10 MBps) [2024-11-17T01:02:58.367Z] Copying: 742/1024 [MB] (10 MBps) [2024-11-17T01:02:59.312Z] Copying: 753/1024 [MB] (11 MBps) [2024-11-17T01:03:00.254Z] Copying: 764/1024 [MB] (11 MBps) [2024-11-17T01:03:01.198Z] Copying: 775/1024 [MB] (11 MBps) [2024-11-17T01:03:02.139Z] Copying: 786/1024 [MB] (10 MBps) [2024-11-17T01:03:03.081Z] Copying: 797/1024 [MB] (10 MBps) [2024-11-17T01:03:04.023Z] Copying: 819/1024 [MB] (21 MBps) [2024-11-17T01:03:05.400Z] Copying: 830/1024 [MB] (11 MBps) [2024-11-17T01:03:06.342Z] Copying: 847/1024 [MB] (16 MBps) [2024-11-17T01:03:07.287Z] Copying: 858/1024 [MB] (10 MBps) [2024-11-17T01:03:08.231Z] Copying: 868/1024 [MB] (10 MBps) [2024-11-17T01:03:09.171Z] Copying: 881/1024 [MB] (12 MBps) [2024-11-17T01:03:10.109Z] Copying: 893/1024 [MB] (11 MBps) [2024-11-17T01:03:11.052Z] Copying: 906/1024 [MB] (13 MBps) [2024-11-17T01:03:11.993Z] Copying: 924/1024 [MB] (18 MBps) [2024-11-17T01:03:13.375Z] Copying: 939/1024 [MB] (14 MBps) [2024-11-17T01:03:14.318Z] Copying: 951/1024 [MB] (11 MBps) [2024-11-17T01:03:15.261Z] Copying: 963/1024 [MB] (12 MBps) [2024-11-17T01:03:16.206Z] Copying: 981/1024 [MB] (18 MBps) [2024-11-17T01:03:17.149Z] Copying: 994/1024 [MB] (12 MBps) [2024-11-17T01:03:18.144Z] Copying: 1005/1024 [MB] (11 MBps) [2024-11-17T01:03:18.144Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-17 01:03:18.097014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.081 [2024-11-17 01:03:18.097106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:26.081 [2024-11-17 01:03:18.097125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:26.081 [2024-11-17 01:03:18.097137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.081 [2024-11-17 01:03:18.097166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:26.081 [2024-11-17 01:03:18.097992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.081 [2024-11-17 01:03:18.098025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:26.081 [2024-11-17 01:03:18.098039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:31:26.081 [2024-11-17 01:03:18.098052] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:26.081 [2024-11-17 01:03:18.098379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.081 [2024-11-17 01:03:18.098399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:26.081 [2024-11-17 01:03:18.098411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:31:26.081 [2024-11-17 01:03:18.098421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.081 [2024-11-17 01:03:18.098457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.081 [2024-11-17 01:03:18.098475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:26.081 [2024-11-17 01:03:18.098486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:26.081 [2024-11-17 01:03:18.098496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.081 [2024-11-17 01:03:18.098566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.081 [2024-11-17 01:03:18.098578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:26.081 [2024-11-17 01:03:18.098589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:26.081 [2024-11-17 01:03:18.098598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.081 [2024-11-17 01:03:18.098616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:26.081 [2024-11-17 01:03:18.098632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.098645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.098657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.098668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.098677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.098687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.098697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.098706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.098716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.099651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.099731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.099774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.099813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.099854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.099892] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.099931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 
[2024-11-17 01:03:18.100834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:26.081 [2024-11-17 01:03:18.100928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.100939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.100949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.100972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.100982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.100992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:31:26.082 [2024-11-17 01:03:18.101107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:26.082 [2024-11-17 01:03:18.101498] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:26.082 [2024-11-17 01:03:18.101514] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dafe0f70-ba94-4e9f-86ae-6e462132ec15 00:31:26.082 [2024-11-17 01:03:18.101524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:26.082 [2024-11-17 01:03:18.101535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:26.082 [2024-11-17 01:03:18.101546] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:26.082 [2024-11-17 01:03:18.101564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:26.082 [2024-11-17 01:03:18.101578] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:26.082 [2024-11-17 01:03:18.101588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:26.082 [2024-11-17 01:03:18.101598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:26.082 [2024-11-17 01:03:18.101607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:26.082 [2024-11-17 01:03:18.101616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:26.082 [2024-11-17 01:03:18.101628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.082 [2024-11-17 01:03:18.101638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:26.082 [2024-11-17 01:03:18.101650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.012 ms 00:31:26.082 [2024-11-17 01:03:18.101660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.082 [2024-11-17 01:03:18.104754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.082 [2024-11-17 01:03:18.104917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:26.082 
[2024-11-17 01:03:18.105029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.052 ms 00:31:26.082 [2024-11-17 01:03:18.105062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.082 [2024-11-17 01:03:18.105272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.082 [2024-11-17 01:03:18.105347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:26.082 [2024-11-17 01:03:18.105434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:31:26.082 [2024-11-17 01:03:18.105465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.082 [2024-11-17 01:03:18.112243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.082 [2024-11-17 01:03:18.112423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:26.082 [2024-11-17 01:03:18.112524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.082 [2024-11-17 01:03:18.112550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.082 [2024-11-17 01:03:18.112649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.082 [2024-11-17 01:03:18.112674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:26.082 [2024-11-17 01:03:18.112739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.112772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.112853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.112892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:26.083 [2024-11-17 01:03:18.112913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.113113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.113240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.113296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:26.083 [2024-11-17 01:03:18.113345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.113522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.127659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.127877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:26.083 [2024-11-17 01:03:18.127897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.127906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.140018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.140085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:26.083 [2024-11-17 01:03:18.140098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.140112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.140169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.140179] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:26.083 [2024-11-17 01:03:18.140193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.140201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.140240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.140250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:26.083 [2024-11-17 01:03:18.140259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.140268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.140339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.140350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:26.083 [2024-11-17 01:03:18.140388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.140397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.140426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.140445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:26.083 [2024-11-17 01:03:18.140455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.140463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.140516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.140527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:26.083 [2024-11-17 01:03:18.140536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.140544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.083 [2024-11-17 01:03:18.140590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.083 [2024-11-17 01:03:18.140602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:26.083 [2024-11-17 01:03:18.140610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.083 [2024-11-17 01:03:18.140618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.343 [2024-11-17 01:03:18.140765] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.711 ms, result 0 00:31:26.343 00:31:26.343 00:31:26.343 01:03:18 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:28.888 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:28.888 01:03:20 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:28.888 [2024-11-17 01:03:20.573655] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:31:28.888 [2024-11-17 01:03:20.573786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95129 ] 00:31:28.888 [2024-11-17 01:03:20.724391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.888 [2024-11-17 01:03:20.763140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.888 [2024-11-17 01:03:20.879569] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:28.888 [2024-11-17 01:03:20.879650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:29.152 [2024-11-17 01:03:21.041966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.152 [2024-11-17 01:03:21.042210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:29.152 [2024-11-17 01:03:21.042247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:29.152 [2024-11-17 01:03:21.042260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.042329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.152 [2024-11-17 01:03:21.042342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:29.152 [2024-11-17 01:03:21.042351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:29.152 [2024-11-17 01:03:21.042391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.042423] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:29.152 [2024-11-17 01:03:21.042700] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:29.152 [2024-11-17 01:03:21.042721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.152 [2024-11-17 01:03:21.042730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:29.152 [2024-11-17 01:03:21.042746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:31:29.152 [2024-11-17 01:03:21.042759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.043068] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:29.152 [2024-11-17 01:03:21.043097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.152 [2024-11-17 01:03:21.043107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:29.152 [2024-11-17 01:03:21.043117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:29.152 [2024-11-17 01:03:21.043127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.043186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.152 [2024-11-17 01:03:21.043205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:29.152 [2024-11-17 01:03:21.043213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:29.152 [2024-11-17 01:03:21.043221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.043493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:29.152 [2024-11-17 01:03:21.043508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:29.152 [2024-11-17 01:03:21.043524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:31:29.152 [2024-11-17 01:03:21.043533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.043617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.152 [2024-11-17 01:03:21.043632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:29.152 [2024-11-17 01:03:21.043641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:29.152 [2024-11-17 01:03:21.043649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.043681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.152 [2024-11-17 01:03:21.043693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:29.152 [2024-11-17 01:03:21.043709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:29.152 [2024-11-17 01:03:21.043719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.043744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:29.152 [2024-11-17 01:03:21.045917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.152 [2024-11-17 01:03:21.045957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:29.152 [2024-11-17 01:03:21.045972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:31:29.152 [2024-11-17 01:03:21.045986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.046021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.152 [2024-11-17 01:03:21.046031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:29.152 [2024-11-17 01:03:21.046041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:29.152 [2024-11-17 01:03:21.046050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.152 [2024-11-17 01:03:21.046099] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:29.152 [2024-11-17 01:03:21.046124] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:29.152 [2024-11-17 01:03:21.046166] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:29.152 [2024-11-17 01:03:21.046184] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:29.152 [2024-11-17 01:03:21.046291] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:29.152 [2024-11-17 01:03:21.046311] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:29.152 [2024-11-17 01:03:21.046324] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:29.152 [2024-11-17 01:03:21.046338] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:29.153 [2024-11-17 01:03:21.046348] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:29.153 [2024-11-17 01:03:21.046386] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:29.153 [2024-11-17 01:03:21.046401] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:29.153 [2024-11-17 01:03:21.046409] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:29.153 [2024-11-17 01:03:21.046417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:29.153 [2024-11-17 01:03:21.046427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.153 [2024-11-17 01:03:21.046435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:29.153 [2024-11-17 01:03:21.046443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:31:29.153 [2024-11-17 01:03:21.046451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.153 [2024-11-17 01:03:21.046544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.153 [2024-11-17 01:03:21.046560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:29.153 [2024-11-17 01:03:21.046571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:31:29.153 [2024-11-17 01:03:21.046581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.153 [2024-11-17 01:03:21.046682] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:29.153 [2024-11-17 01:03:21.046695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:29.153 [2024-11-17 01:03:21.046704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:29.153 [2024-11-17 01:03:21.046717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:29.153 [2024-11-17 01:03:21.046740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:29.153 [2024-11-17 01:03:21.046758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:29.153 [2024-11-17 01:03:21.046766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:29.153 [2024-11-17 01:03:21.046780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:29.153 [2024-11-17 01:03:21.046787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:29.153 [2024-11-17 01:03:21.046799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:29.153 [2024-11-17 01:03:21.046806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:29.153 [2024-11-17 01:03:21.046813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:29.153 [2024-11-17 01:03:21.046820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:29.153 [2024-11-17 01:03:21.046833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:29.153 [2024-11-17 01:03:21.046840] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:29.153 [2024-11-17 01:03:21.046857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.153 [2024-11-17 01:03:21.046870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:29.153 [2024-11-17 01:03:21.046877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.153 [2024-11-17 01:03:21.046892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:29.153 [2024-11-17 01:03:21.046899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.153 [2024-11-17 01:03:21.046913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:29.153 [2024-11-17 01:03:21.046919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.153 [2024-11-17 01:03:21.046933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:29.153 [2024-11-17 01:03:21.046940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:29.153 [2024-11-17 01:03:21.046947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:29.153 [2024-11-17 01:03:21.046954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:29.153 [2024-11-17 01:03:21.046967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:29.153 [2024-11-17 01:03:21.046974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:29.153 [2024-11-17 01:03:21.046980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:29.153 [2024-11-17 01:03:21.046987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:29.153 [2024-11-17 01:03:21.046993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.153 [2024-11-17 01:03:21.047000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:29.153 [2024-11-17 01:03:21.047009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:29.153 [2024-11-17 01:03:21.047017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.153 [2024-11-17 01:03:21.047025] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:29.153 [2024-11-17 01:03:21.047034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:29.153 [2024-11-17 01:03:21.047046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:29.153 [2024-11-17 01:03:21.047053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.153 [2024-11-17 01:03:21.047061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:29.153 [2024-11-17 01:03:21.047068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:29.153 [2024-11-17 01:03:21.047075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:29.153 
[2024-11-17 01:03:21.047082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:29.153 [2024-11-17 01:03:21.047091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:29.153 [2024-11-17 01:03:21.047099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:29.153 [2024-11-17 01:03:21.047109] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:29.153 [2024-11-17 01:03:21.047121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:29.153 [2024-11-17 01:03:21.047130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:29.153 [2024-11-17 01:03:21.047138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:29.153 [2024-11-17 01:03:21.047145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:29.153 [2024-11-17 01:03:21.047153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:29.153 [2024-11-17 01:03:21.047160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:29.153 [2024-11-17 01:03:21.047167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:29.153 [2024-11-17 01:03:21.047176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:29.153 [2024-11-17 01:03:21.047185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:29.153 [2024-11-17 01:03:21.047192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:29.153 [2024-11-17 01:03:21.047200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:29.153 [2024-11-17 01:03:21.047207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:29.153 [2024-11-17 01:03:21.047214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:29.153 [2024-11-17 01:03:21.047224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:29.153 [2024-11-17 01:03:21.047231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:29.153 [2024-11-17 01:03:21.047240] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:29.153 [2024-11-17 01:03:21.047250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:29.154 [2024-11-17 01:03:21.047259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:29.154 [2024-11-17 01:03:21.047266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:29.154 [2024-11-17 01:03:21.047274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:29.154 [2024-11-17 01:03:21.047281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:29.154 [2024-11-17 01:03:21.047289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.047298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:29.154 [2024-11-17 01:03:21.047307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:31:29.154 [2024-11-17 01:03:21.047315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.064487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.064679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:29.154 [2024-11-17 01:03:21.064707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.114 ms 00:31:29.154 [2024-11-17 01:03:21.064717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.064815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.064825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:29.154 [2024-11-17 01:03:21.064840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:29.154 [2024-11-17 01:03:21.064849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.077506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.077550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:29.154 [2024-11-17 01:03:21.077565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.588 ms 00:31:29.154 [2024-11-17 01:03:21.077573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.077609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.077623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:29.154 [2024-11-17 01:03:21.077632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:29.154 [2024-11-17 01:03:21.077641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.077735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.077749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:29.154 [2024-11-17 01:03:21.077757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:29.154 [2024-11-17 01:03:21.077777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.077902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.077914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:29.154 [2024-11-17 01:03:21.077922] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:31:29.154 [2024-11-17 01:03:21.077934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.084857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.084902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:29.154 [2024-11-17 01:03:21.084914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.899 ms 00:31:29.154 [2024-11-17 01:03:21.084930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.085060] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:29.154 [2024-11-17 01:03:21.085075] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:29.154 [2024-11-17 01:03:21.085086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.085104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:29.154 [2024-11-17 01:03:21.085114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:29.154 [2024-11-17 01:03:21.085121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.097450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.097494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:29.154 [2024-11-17 01:03:21.097505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.311 ms 00:31:29.154 [2024-11-17 01:03:21.097514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.097653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.097664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:29.154 [2024-11-17 01:03:21.097673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:31:29.154 [2024-11-17 01:03:21.097682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.097734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.097743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:29.154 [2024-11-17 01:03:21.097752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:29.154 [2024-11-17 01:03:21.097763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.098074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.098086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:29.154 [2024-11-17 01:03:21.098095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:31:29.154 [2024-11-17 01:03:21.098103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.098120] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:29.154 [2024-11-17 01:03:21.098132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.098140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:29.154 [2024-11-17 01:03:21.098154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:29.154 [2024-11-17 01:03:21.098165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.107412] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:29.154 [2024-11-17 01:03:21.107703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.107720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:29.154 [2024-11-17 01:03:21.107731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.519 ms 00:31:29.154 [2024-11-17 01:03:21.107739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.110252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.110288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:29.154 [2024-11-17 01:03:21.110299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.487 ms 00:31:29.154 [2024-11-17 01:03:21.110312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.110433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.110446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:29.154 [2024-11-17 01:03:21.110456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:29.154 [2024-11-17 01:03:21.110466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.110492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.110505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:29.154 [2024-11-17 01:03:21.110514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:29.154 [2024-11-17 01:03:21.110527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.110560] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:29.154 [2024-11-17 01:03:21.110574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.110585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:29.154 [2024-11-17 01:03:21.110593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:29.154 [2024-11-17 01:03:21.110602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.116685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.116732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:29.154 [2024-11-17 01:03:21.116755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.061 ms 00:31:29.154 [2024-11-17 01:03:21.116764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.116854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.154 [2024-11-17 01:03:21.116865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:29.154 [2024-11-17 01:03:21.116873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.044 ms 00:31:29.154 [2024-11-17 01:03:21.116882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.154 [2024-11-17 01:03:21.118053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 75.641 ms, result 0 00:31:30.098  [2024-11-17T01:03:23.546Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-17T01:03:24.487Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-17T01:03:25.426Z] Copying: 31/1024 [MB] (10 MBps) [2024-11-17T01:03:26.367Z] Copying: 41/1024 [MB] (10 MBps) [2024-11-17T01:03:27.310Z] Copying: 56/1024 [MB] (14 MBps) [2024-11-17T01:03:28.248Z] Copying: 74/1024 [MB] (17 MBps) [2024-11-17T01:03:29.190Z] Copying: 87/1024 [MB] (13 MBps) [2024-11-17T01:03:30.569Z] Copying: 105/1024 [MB] (18 MBps) [2024-11-17T01:03:31.134Z] Copying: 120/1024 [MB] (14 MBps) [2024-11-17T01:03:32.507Z] Copying: 145/1024 [MB] (25 MBps) [2024-11-17T01:03:33.446Z] Copying: 181/1024 [MB] (35 MBps) [2024-11-17T01:03:34.385Z] Copying: 199/1024 [MB] (18 MBps) [2024-11-17T01:03:35.322Z] Copying: 211/1024 [MB] (11 MBps) [2024-11-17T01:03:36.266Z] Copying: 237/1024 [MB] (25 MBps) [2024-11-17T01:03:37.208Z] Copying: 254/1024 [MB] (17 MBps) [2024-11-17T01:03:38.150Z] Copying: 267/1024 [MB] (13 MBps) [2024-11-17T01:03:39.532Z] Copying: 280/1024 [MB] (12 MBps) [2024-11-17T01:03:40.474Z] Copying: 309/1024 [MB] (28 MBps) [2024-11-17T01:03:41.414Z] Copying: 323/1024 [MB] (14 MBps) [2024-11-17T01:03:42.356Z] Copying: 342/1024 [MB] (19 MBps) [2024-11-17T01:03:43.294Z] Copying: 359/1024 [MB] (16 MBps) [2024-11-17T01:03:44.233Z] Copying: 382/1024 [MB] (23 MBps) [2024-11-17T01:03:45.169Z] Copying: 403/1024 [MB] (21 MBps) [2024-11-17T01:03:46.571Z] Copying: 443/1024 [MB] (39 MBps) [2024-11-17T01:03:47.180Z] Copying: 459/1024 [MB] (16 MBps) [2024-11-17T01:03:48.560Z] Copying: 482/1024 [MB] (22 MBps) [2024-11-17T01:03:49.132Z] Copying: 504/1024 [MB] (22 MBps) [2024-11-17T01:03:50.536Z] Copying: 534/1024 [MB] (29 MBps) [2024-11-17T01:03:51.479Z] Copying: 549/1024 [MB] (15 MBps) [2024-11-17T01:03:52.422Z] Copying: 563/1024 [MB] (13 MBps) [2024-11-17T01:03:53.367Z] Copying: 582/1024 [MB] (19 MBps) [2024-11-17T01:03:54.310Z] Copying: 596/1024 [MB] (14 MBps) [2024-11-17T01:03:55.250Z] Copying: 610/1024 [MB] (13 MBps) [2024-11-17T01:03:56.196Z] Copying: 653/1024 [MB] (42 MBps) [2024-11-17T01:03:57.141Z] Copying: 674/1024 [MB] (21 MBps) [2024-11-17T01:03:58.527Z] Copying: 694/1024 [MB] (19 MBps) [2024-11-17T01:03:59.471Z] Copying: 715/1024 [MB] (20 MBps) [2024-11-17T01:04:00.415Z] Copying: 734/1024 [MB] (18 MBps) [2024-11-17T01:04:01.360Z] Copying: 754/1024 [MB] (20 MBps) [2024-11-17T01:04:02.302Z] Copying: 775/1024 [MB] (20 MBps) [2024-11-17T01:04:03.246Z] Copying: 787/1024 [MB] (12 MBps) [2024-11-17T01:04:04.191Z] Copying: 808/1024 [MB] (20 MBps) [2024-11-17T01:04:05.134Z] Copying: 829/1024 [MB] (21 MBps) [2024-11-17T01:04:06.519Z] Copying: 850/1024 [MB] (21 MBps) [2024-11-17T01:04:07.462Z] Copying: 873/1024 [MB] (22 MBps) [2024-11-17T01:04:08.408Z] Copying: 891/1024 [MB] (18 MBps) [2024-11-17T01:04:09.351Z] Copying: 911/1024 [MB] (19 MBps) [2024-11-17T01:04:10.291Z] Copying: 931/1024 [MB] (20 MBps) [2024-11-17T01:04:11.239Z] Copying: 955/1024 [MB] (24 MBps) [2024-11-17T01:04:12.185Z] Copying: 976/1024 [MB] (20 MBps) [2024-11-17T01:04:13.281Z] Copying: 990/1024 [MB] (14 MBps) [2024-11-17T01:04:14.258Z] Copying: 1005/1024 [MB] (14 MBps) [2024-11-17T01:04:15.205Z] Copying: 1023/1024 [MB] (18 MBps) [2024-11-17T01:04:15.205Z] Copying: 1024/1024 [MB] 
(average 18 MBps)[2024-11-17 01:04:15.107650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.142 [2024-11-17 01:04:15.107757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:23.142 [2024-11-17 01:04:15.107776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:23.142 [2024-11-17 01:04:15.107786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.142 [2024-11-17 01:04:15.109965] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:23.142 [2024-11-17 01:04:15.113197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.142 [2024-11-17 01:04:15.113389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:23.142 [2024-11-17 01:04:15.113412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:32:23.142 [2024-11-17 01:04:15.113422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.142 [2024-11-17 01:04:15.124806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.142 [2024-11-17 01:04:15.124996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:23.142 [2024-11-17 01:04:15.125038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.490 ms 00:32:23.142 [2024-11-17 01:04:15.125048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.142 [2024-11-17 01:04:15.125085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.142 [2024-11-17 01:04:15.125094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:23.142 [2024-11-17 01:04:15.125104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:23.142 [2024-11-17 01:04:15.125112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.142 [2024-11-17 01:04:15.125170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.142 [2024-11-17 01:04:15.125180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:23.142 [2024-11-17 01:04:15.125190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:23.142 [2024-11-17 01:04:15.125201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.142 [2024-11-17 01:04:15.125219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:23.142 [2024-11-17 01:04:15.125231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126976 / 261120 wr_cnt: 1 state: open 00:32:23.142 [2024-11-17 01:04:15.125242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 [2024-11-17 01:04:15.125250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 [2024-11-17 01:04:15.125258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 [2024-11-17 01:04:15.125267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 [2024-11-17 01:04:15.125275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 [2024-11-17 01:04:15.125283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 
[2024-11-17 01:04:15.125292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 [2024-11-17 01:04:15.125300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 [2024-11-17 01:04:15.125308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 [2024-11-17 01:04:15.125316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:23.142 [2024-11-17 01:04:15.125325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:32:23.143 [2024-11-17 01:04:15.125534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:23.143 [2024-11-17 01:04:15.125968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.125975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.125984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.125992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:23.144 [2024-11-17 01:04:15.126100] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:23.144 [2024-11-17 01:04:15.126108] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dafe0f70-ba94-4e9f-86ae-6e462132ec15 00:32:23.144 [2024-11-17 01:04:15.126122] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126976 00:32:23.144 [2024-11-17 01:04:15.126129] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127008 00:32:23.144 [2024-11-17 01:04:15.126137] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126976 00:32:23.144 [2024-11-17 01:04:15.126145] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:32:23.144 [2024-11-17 01:04:15.126153] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:23.144 [2024-11-17 01:04:15.126161] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] crit: 0 00:32:23.144 [2024-11-17 01:04:15.126172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:23.144 [2024-11-17 01:04:15.126178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:23.144 [2024-11-17 01:04:15.126185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:23.144 [2024-11-17 01:04:15.126192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.144 [2024-11-17 01:04:15.126199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:23.144 [2024-11-17 01:04:15.126208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:32:23.144 [2024-11-17 01:04:15.126215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.128612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.144 [2024-11-17 01:04:15.128655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:23.144 [2024-11-17 01:04:15.128666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.381 ms 00:32:23.144 [2024-11-17 01:04:15.128676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.128800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.144 [2024-11-17 01:04:15.128809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:23.144 [2024-11-17 01:04:15.128818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:32:23.144 [2024-11-17 01:04:15.128826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.135680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.135853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:23.144 [2024-11-17 01:04:15.135879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.135887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.135953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.135962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:23.144 [2024-11-17 01:04:15.135971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.135985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.136050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.136062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:23.144 [2024-11-17 01:04:15.136070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.136082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.136099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.136108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:23.144 [2024-11-17 01:04:15.136116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.136128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 
01:04:15.149906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.149959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:23.144 [2024-11-17 01:04:15.149970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.149987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.161270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.161323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:23.144 [2024-11-17 01:04:15.161335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.161344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.161419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.161430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:23.144 [2024-11-17 01:04:15.161438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.161447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.161516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.161532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:23.144 [2024-11-17 01:04:15.161541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.161549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.161609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.161619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:23.144 [2024-11-17 01:04:15.161628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.161637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.161661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.161681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:23.144 [2024-11-17 01:04:15.161690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.161698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.161736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.161746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:23.144 [2024-11-17 01:04:15.161754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.161763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.161816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:23.144 [2024-11-17 01:04:15.161830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:23.144 [2024-11-17 01:04:15.161839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:23.144 [2024-11-17 01:04:15.161848] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.144 [2024-11-17 01:04:15.161982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 55.586 ms, result 0 00:32:24.090 00:32:24.090 00:32:24.090 01:04:15 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:24.090 [2024-11-17 01:04:15.943635] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:24.090 [2024-11-17 01:04:15.943784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95685 ] 00:32:24.090 [2024-11-17 01:04:16.097776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.090 [2024-11-17 01:04:16.150944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.351 [2024-11-17 01:04:16.267133] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:24.351 [2024-11-17 01:04:16.267221] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:24.615 [2024-11-17 01:04:16.429040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.615 [2024-11-17 01:04:16.429281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:24.615 [2024-11-17 01:04:16.429312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:24.615 [2024-11-17 01:04:16.429323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.615 [2024-11-17 01:04:16.429433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.615 [2024-11-17 01:04:16.429446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:24.615 [2024-11-17 01:04:16.429456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:32:24.615 [2024-11-17 01:04:16.429464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.615 [2024-11-17 01:04:16.429487] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:24.615 [2024-11-17 01:04:16.429769] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:24.615 [2024-11-17 01:04:16.429791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.615 [2024-11-17 01:04:16.429801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:24.615 [2024-11-17 01:04:16.429811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:32:24.615 [2024-11-17 01:04:16.429823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.615 [2024-11-17 01:04:16.430130] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:24.615 [2024-11-17 01:04:16.430161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.615 [2024-11-17 01:04:16.430172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:24.615 [2024-11-17 01:04:16.430183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:24.615 
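[Editor's note: the restore step above reads data back out of the FTL bdev with spdk_dd. A minimal sketch of that same invocation, using only flags and paths that appear verbatim in the log; the 4 KiB logical block size is an assumption used to relate --count to the copy totals, not something the log states:

# read 262144 blocks, skipping the first 131072, from bdev ftl0 into a file;
# at an assumed 4 KiB block, 262144 blocks = 1024 MiB, the same total the
# earlier "Copying: 1024/1024 [MB]" progress counter reports
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
    --skip=131072 --count=262144
]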
[2024-11-17 01:04:16.430192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.615 [2024-11-17 01:04:16.430251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.615 [2024-11-17 01:04:16.430264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:24.615 [2024-11-17 01:04:16.430273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:24.615 [2024-11-17 01:04:16.430280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.615 [2024-11-17 01:04:16.430546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.615 [2024-11-17 01:04:16.430558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:24.615 [2024-11-17 01:04:16.430573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:32:24.615 [2024-11-17 01:04:16.430586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.615 [2024-11-17 01:04:16.430668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.616 [2024-11-17 01:04:16.430681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:24.616 [2024-11-17 01:04:16.430691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:24.616 [2024-11-17 01:04:16.430699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.616 [2024-11-17 01:04:16.430724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.616 [2024-11-17 01:04:16.430733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:24.616 [2024-11-17 01:04:16.430741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:24.616 [2024-11-17 01:04:16.430749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.616 [2024-11-17 01:04:16.430770] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:24.616 [2024-11-17 01:04:16.432935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.616 [2024-11-17 01:04:16.432993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:24.616 [2024-11-17 01:04:16.433005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:32:24.616 [2024-11-17 01:04:16.433014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.616 [2024-11-17 01:04:16.433050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.616 [2024-11-17 01:04:16.433059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:24.616 [2024-11-17 01:04:16.433069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:24.616 [2024-11-17 01:04:16.433078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.616 [2024-11-17 01:04:16.433141] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:24.616 [2024-11-17 01:04:16.433164] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:24.616 [2024-11-17 01:04:16.433205] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:24.616 [2024-11-17 01:04:16.433222] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:24.616 
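[Editor's note: the sizes printed during startup and shutdown are internally consistent and easy to sanity-check. A small bash sketch using only numbers taken from this log (L2P entry count and address size from the layout summaries, blob sizes from the superblock lines just above, write counters from the shutdown statistics):

# L2P region: 20971520 entries * 4 B = 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"
echo $(( 20971520 * 4 / 1024 / 1024 ))            # -> 80
# superblock v5 blob areas as loaded above: nvc 0x150, base 0x48, layout 0x190 bytes
printf '%d %d %d\n' 0x150 0x48 0x190              # -> 336 72 400
# write amplification from the shutdown stats: total writes / user writes
awk 'BEGIN { printf "%.4f\n", 127008 / 126976 }'  # -> 1.0003, as reported
]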
[2024-11-17 01:04:16.433328] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:24.616 [2024-11-17 01:04:16.433341] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:24.616 [2024-11-17 01:04:16.433382] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:24.616 [2024-11-17 01:04:16.433395] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:24.616 [2024-11-17 01:04:16.433404] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:24.616 [2024-11-17 01:04:16.433412] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:24.616 [2024-11-17 01:04:16.433423] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:24.616 [2024-11-17 01:04:16.433431] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:24.616 [2024-11-17 01:04:16.433439] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:24.616 [2024-11-17 01:04:16.433447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.616 [2024-11-17 01:04:16.433454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:24.616 [2024-11-17 01:04:16.433465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:32:24.616 [2024-11-17 01:04:16.433472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.616 [2024-11-17 01:04:16.433574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.616 [2024-11-17 01:04:16.433584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:24.616 [2024-11-17 01:04:16.433592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:24.616 [2024-11-17 01:04:16.433601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.616 [2024-11-17 01:04:16.433702] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:24.616 [2024-11-17 01:04:16.433713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:24.616 [2024-11-17 01:04:16.433721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:24.616 [2024-11-17 01:04:16.433729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:24.616 [2024-11-17 01:04:16.433750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:24.616 [2024-11-17 01:04:16.433766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:24.616 [2024-11-17 01:04:16.433773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:24.616 [2024-11-17 01:04:16.433789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:24.616 [2024-11-17 01:04:16.433797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:24.616 [2024-11-17 01:04:16.433806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.50 MiB 00:32:24.616 [2024-11-17 01:04:16.433813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:24.616 [2024-11-17 01:04:16.433821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:24.616 [2024-11-17 01:04:16.433828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:24.616 [2024-11-17 01:04:16.433842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:24.616 [2024-11-17 01:04:16.433850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:24.616 [2024-11-17 01:04:16.433864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.616 [2024-11-17 01:04:16.433878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:24.616 [2024-11-17 01:04:16.433885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.616 [2024-11-17 01:04:16.433902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:24.616 [2024-11-17 01:04:16.433909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.616 [2024-11-17 01:04:16.433923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:24.616 [2024-11-17 01:04:16.433930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.616 [2024-11-17 01:04:16.433944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:24.616 [2024-11-17 01:04:16.433951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:24.616 [2024-11-17 01:04:16.433957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:24.616 [2024-11-17 01:04:16.433964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:24.616 [2024-11-17 01:04:16.433970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:24.616 [2024-11-17 01:04:16.433977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:24.616 [2024-11-17 01:04:16.433984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:24.616 [2024-11-17 01:04:16.433991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:24.616 [2024-11-17 01:04:16.433997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.616 [2024-11-17 01:04:16.434004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:24.616 [2024-11-17 01:04:16.434015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:24.616 [2024-11-17 01:04:16.434023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.616 [2024-11-17 01:04:16.434030] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:24.616 [2024-11-17 01:04:16.434041] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:24.616 [2024-11-17 01:04:16.434049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:24.616 [2024-11-17 01:04:16.434057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.616 [2024-11-17 01:04:16.434064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:24.616 [2024-11-17 01:04:16.434071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:24.616 [2024-11-17 01:04:16.434079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:24.616 [2024-11-17 01:04:16.434086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:24.616 [2024-11-17 01:04:16.434093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:24.616 [2024-11-17 01:04:16.434100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:24.616 [2024-11-17 01:04:16.434108] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:24.616 [2024-11-17 01:04:16.434119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.617 [2024-11-17 01:04:16.434128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:24.617 [2024-11-17 01:04:16.434136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:24.617 [2024-11-17 01:04:16.434146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:24.617 [2024-11-17 01:04:16.434155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:24.617 [2024-11-17 01:04:16.434163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:24.617 [2024-11-17 01:04:16.434170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:24.617 [2024-11-17 01:04:16.434176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:24.617 [2024-11-17 01:04:16.434183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:24.617 [2024-11-17 01:04:16.434190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:24.617 [2024-11-17 01:04:16.434197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:24.617 [2024-11-17 01:04:16.434205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:24.617 [2024-11-17 01:04:16.434211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:24.617 [2024-11-17 01:04:16.434219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 
00:32:24.617 [2024-11-17 01:04:16.434226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:24.617 [2024-11-17 01:04:16.434233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:24.617 [2024-11-17 01:04:16.434242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.617 [2024-11-17 01:04:16.434251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:24.617 [2024-11-17 01:04:16.434259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:24.617 [2024-11-17 01:04:16.434269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:24.617 [2024-11-17 01:04:16.434277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:24.617 [2024-11-17 01:04:16.434284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.434293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:24.617 [2024-11-17 01:04:16.434301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:32:24.617 [2024-11-17 01:04:16.434308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.452923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.453183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:24.617 [2024-11-17 01:04:16.453694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.571 ms 00:32:24.617 [2024-11-17 01:04:16.453757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.454066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.454586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:24.617 [2024-11-17 01:04:16.454854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:32:24.617 [2024-11-17 01:04:16.454927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.467321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.467538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:24.617 [2024-11-17 01:04:16.467608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.220 ms 00:32:24.617 [2024-11-17 01:04:16.467638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.467697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.467720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:24.617 [2024-11-17 01:04:16.467752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:24.617 [2024-11-17 01:04:16.467771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.467897] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.467926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:24.617 [2024-11-17 01:04:16.468019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:24.617 [2024-11-17 01:04:16.468048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.468197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.468237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:24.617 [2024-11-17 01:04:16.468258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:32:24.617 [2024-11-17 01:04:16.468277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.475538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.475695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:24.617 [2024-11-17 01:04:16.475755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.228 ms 00:32:24.617 [2024-11-17 01:04:16.475787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.475953] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:24.617 [2024-11-17 01:04:16.475998] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:24.617 [2024-11-17 01:04:16.476203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.476233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:24.617 [2024-11-17 01:04:16.476270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:32:24.617 [2024-11-17 01:04:16.476295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.488726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.488889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:24.617 [2024-11-17 01:04:16.488947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.315 ms 00:32:24.617 [2024-11-17 01:04:16.488982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.489129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.489152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:24.617 [2024-11-17 01:04:16.489182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:32:24.617 [2024-11-17 01:04:16.489201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.489439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.489535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:24.617 [2024-11-17 01:04:16.489600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:24.617 [2024-11-17 01:04:16.489630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.489968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.490118] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:24.617 [2024-11-17 01:04:16.490187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:32:24.617 [2024-11-17 01:04:16.490296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.490351] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:24.617 [2024-11-17 01:04:16.490450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.490551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:24.617 [2024-11-17 01:04:16.490575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:32:24.617 [2024-11-17 01:04:16.490589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.500036] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:24.617 [2024-11-17 01:04:16.500214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.500225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:24.617 [2024-11-17 01:04:16.500236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.598 ms 00:32:24.617 [2024-11-17 01:04:16.500243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.502870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.502906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:24.617 [2024-11-17 01:04:16.502916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.597 ms 00:32:24.617 [2024-11-17 01:04:16.502923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.617 [2024-11-17 01:04:16.503010] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:24.617 [2024-11-17 01:04:16.503628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.617 [2024-11-17 01:04:16.503649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:24.618 [2024-11-17 01:04:16.503659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:32:24.618 [2024-11-17 01:04:16.503668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.618 [2024-11-17 01:04:16.503704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.618 [2024-11-17 01:04:16.503713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:24.618 [2024-11-17 01:04:16.503726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:24.618 [2024-11-17 01:04:16.503734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.618 [2024-11-17 01:04:16.503770] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:24.618 [2024-11-17 01:04:16.503784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.618 [2024-11-17 01:04:16.503792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:24.618 [2024-11-17 01:04:16.503800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:24.618 [2024-11-17 01:04:16.503808] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.618 [2024-11-17 01:04:16.510275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.618 [2024-11-17 01:04:16.510485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:24.618 [2024-11-17 01:04:16.510506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.448 ms 00:32:24.618 [2024-11-17 01:04:16.510515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.618 [2024-11-17 01:04:16.510597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.618 [2024-11-17 01:04:16.510608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:24.618 [2024-11-17 01:04:16.510617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:24.618 [2024-11-17 01:04:16.510624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.618 [2024-11-17 01:04:16.511847] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 82.323 ms, result 0 00:32:26.011  [2024-11-17T01:04:19.021Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-17T01:04:19.967Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-17T01:04:20.908Z] Copying: 31/1024 [MB] (11 MBps) [2024-11-17T01:04:21.854Z] Copying: 55/1024 [MB] (23 MBps) [2024-11-17T01:04:22.799Z] Copying: 74/1024 [MB] (19 MBps) [2024-11-17T01:04:23.743Z] Copying: 95/1024 [MB] (21 MBps) [2024-11-17T01:04:25.130Z] Copying: 114/1024 [MB] (18 MBps) [2024-11-17T01:04:26.075Z] Copying: 139/1024 [MB] (25 MBps) [2024-11-17T01:04:27.021Z] Copying: 160/1024 [MB] (20 MBps) [2024-11-17T01:04:27.969Z] Copying: 180/1024 [MB] (20 MBps) [2024-11-17T01:04:28.916Z] Copying: 200/1024 [MB] (20 MBps) [2024-11-17T01:04:29.861Z] Copying: 216/1024 [MB] (15 MBps) [2024-11-17T01:04:30.805Z] Copying: 234/1024 [MB] (17 MBps) [2024-11-17T01:04:31.748Z] Copying: 253/1024 [MB] (19 MBps) [2024-11-17T01:04:33.139Z] Copying: 276/1024 [MB] (22 MBps) [2024-11-17T01:04:33.712Z] Copying: 288/1024 [MB] (11 MBps) [2024-11-17T01:04:35.100Z] Copying: 299/1024 [MB] (10 MBps) [2024-11-17T01:04:36.044Z] Copying: 310/1024 [MB] (10 MBps) [2024-11-17T01:04:36.988Z] Copying: 321/1024 [MB] (10 MBps) [2024-11-17T01:04:37.933Z] Copying: 331/1024 [MB] (10 MBps) [2024-11-17T01:04:38.879Z] Copying: 342/1024 [MB] (10 MBps) [2024-11-17T01:04:39.821Z] Copying: 352/1024 [MB] (10 MBps) [2024-11-17T01:04:40.766Z] Copying: 363/1024 [MB] (10 MBps) [2024-11-17T01:04:41.714Z] Copying: 373/1024 [MB] (10 MBps) [2024-11-17T01:04:43.103Z] Copying: 384/1024 [MB] (10 MBps) [2024-11-17T01:04:44.050Z] Copying: 394/1024 [MB] (10 MBps) [2024-11-17T01:04:44.995Z] Copying: 405/1024 [MB] (10 MBps) [2024-11-17T01:04:45.940Z] Copying: 416/1024 [MB] (10 MBps) [2024-11-17T01:04:46.886Z] Copying: 427/1024 [MB] (10 MBps) [2024-11-17T01:04:47.830Z] Copying: 438/1024 [MB] (10 MBps) [2024-11-17T01:04:48.776Z] Copying: 449/1024 [MB] (10 MBps) [2024-11-17T01:04:49.737Z] Copying: 460/1024 [MB] (11 MBps) [2024-11-17T01:04:51.128Z] Copying: 471/1024 [MB] (10 MBps) [2024-11-17T01:04:51.802Z] Copying: 481/1024 [MB] (10 MBps) [2024-11-17T01:04:52.748Z] Copying: 505/1024 [MB] (24 MBps) [2024-11-17T01:04:54.132Z] Copying: 519/1024 [MB] (13 MBps) [2024-11-17T01:04:55.077Z] Copying: 539/1024 [MB] (20 MBps) [2024-11-17T01:04:56.023Z] Copying: 560/1024 [MB] (20 MBps) [2024-11-17T01:04:56.970Z] Copying: 575/1024 [MB] (15 MBps) [2024-11-17T01:04:57.916Z] Copying: 586/1024 [MB] (11 MBps) 
[2024-11-17T01:04:58.862Z] Copying: 598/1024 [MB] (11 MBps) [2024-11-17T01:04:59.805Z] Copying: 614/1024 [MB] (15 MBps) [2024-11-17T01:05:00.752Z] Copying: 632/1024 [MB] (18 MBps) [2024-11-17T01:05:02.140Z] Copying: 652/1024 [MB] (19 MBps) [2024-11-17T01:05:02.713Z] Copying: 668/1024 [MB] (15 MBps) [2024-11-17T01:05:04.104Z] Copying: 689/1024 [MB] (21 MBps) [2024-11-17T01:05:05.048Z] Copying: 702/1024 [MB] (12 MBps) [2024-11-17T01:05:05.994Z] Copying: 713/1024 [MB] (11 MBps) [2024-11-17T01:05:06.939Z] Copying: 724/1024 [MB] (10 MBps) [2024-11-17T01:05:07.882Z] Copying: 736/1024 [MB] (12 MBps) [2024-11-17T01:05:08.826Z] Copying: 749/1024 [MB] (12 MBps) [2024-11-17T01:05:09.773Z] Copying: 761/1024 [MB] (12 MBps) [2024-11-17T01:05:10.716Z] Copying: 772/1024 [MB] (10 MBps) [2024-11-17T01:05:12.104Z] Copying: 788/1024 [MB] (15 MBps) [2024-11-17T01:05:13.046Z] Copying: 800/1024 [MB] (12 MBps) [2024-11-17T01:05:13.992Z] Copying: 812/1024 [MB] (11 MBps) [2024-11-17T01:05:14.938Z] Copying: 827/1024 [MB] (14 MBps) [2024-11-17T01:05:15.883Z] Copying: 837/1024 [MB] (10 MBps) [2024-11-17T01:05:16.824Z] Copying: 848/1024 [MB] (10 MBps) [2024-11-17T01:05:17.769Z] Copying: 862/1024 [MB] (14 MBps) [2024-11-17T01:05:18.714Z] Copying: 874/1024 [MB] (12 MBps) [2024-11-17T01:05:20.097Z] Copying: 885/1024 [MB] (10 MBps) [2024-11-17T01:05:21.107Z] Copying: 898/1024 [MB] (12 MBps) [2024-11-17T01:05:22.051Z] Copying: 909/1024 [MB] (11 MBps) [2024-11-17T01:05:22.996Z] Copying: 927/1024 [MB] (18 MBps) [2024-11-17T01:05:23.941Z] Copying: 939/1024 [MB] (11 MBps) [2024-11-17T01:05:24.887Z] Copying: 953/1024 [MB] (14 MBps) [2024-11-17T01:05:25.831Z] Copying: 964/1024 [MB] (11 MBps) [2024-11-17T01:05:26.774Z] Copying: 977/1024 [MB] (12 MBps) [2024-11-17T01:05:27.718Z] Copying: 989/1024 [MB] (12 MBps) [2024-11-17T01:05:29.106Z] Copying: 1007/1024 [MB] (18 MBps) [2024-11-17T01:05:29.364Z] Copying: 1018/1024 [MB] (10 MBps) [2024-11-17T01:05:29.626Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-17 01:05:29.372030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.563 [2024-11-17 01:05:29.372142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:37.563 [2024-11-17 01:05:29.372171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:37.563 [2024-11-17 01:05:29.372184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.563 [2024-11-17 01:05:29.372217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:37.563 [2024-11-17 01:05:29.373117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.563 [2024-11-17 01:05:29.373151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:37.563 [2024-11-17 01:05:29.373173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:33:37.563 [2024-11-17 01:05:29.373184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.563 [2024-11-17 01:05:29.373897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.563 [2024-11-17 01:05:29.373926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:37.563 [2024-11-17 01:05:29.373946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:33:37.563 [2024-11-17 01:05:29.373957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.563 [2024-11-17 01:05:29.373998] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:33:37.563 [2024-11-17 01:05:29.374011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:37.563 [2024-11-17 01:05:29.374023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:37.563 [2024-11-17 01:05:29.374034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.563 [2024-11-17 01:05:29.374111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.563 [2024-11-17 01:05:29.374125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:37.563 [2024-11-17 01:05:29.374141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:33:37.563 [2024-11-17 01:05:29.374160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.563 [2024-11-17 01:05:29.374181] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:37.563 [2024-11-17 01:05:29.374199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:37.563 [2024-11-17 01:05:29.374213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 
01:05:29.374429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:33:37.564 [2024-11-17 01:05:29.374854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.374992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:37.564 [2024-11-17 01:05:29.375108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:37.565 [2024-11-17 01:05:29.375530] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:37.565 [2024-11-17 01:05:29.375546] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dafe0f70-ba94-4e9f-86ae-6e462132ec15 00:33:37.565 [2024-11-17 01:05:29.375558] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:37.565 [2024-11-17 01:05:29.375570] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4128 00:33:37.565 [2024-11-17 01:05:29.375587] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4096 00:33:37.565 [2024-11-17 01:05:29.375600] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078 00:33:37.565 [2024-11-17 01:05:29.375611] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:37.565 [2024-11-17 01:05:29.375625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:37.565 [2024-11-17 01:05:29.375641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:37.565 [2024-11-17 01:05:29.375651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:37.565 [2024-11-17 01:05:29.375661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:37.565 [2024-11-17 01:05:29.375672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.565 [2024-11-17 01:05:29.375683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:37.565 [2024-11-17 01:05:29.375695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:33:37.565 [2024-11-17 01:05:29.375705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.565 [2024-11-17 01:05:29.378558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.565 [2024-11-17 01:05:29.378596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:37.565 [2024-11-17 01:05:29.378611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.830 ms 00:33:37.565 [2024-11-17 01:05:29.378623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.565 [2024-11-17 01:05:29.378746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.565 [2024-11-17 01:05:29.378762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:37.565 [2024-11-17 01:05:29.378771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.098 ms 00:33:37.565 [2024-11-17 01:05:29.378778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.565 [2024-11-17 01:05:29.385393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.565 [2024-11-17 01:05:29.385432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:37.565 [2024-11-17 01:05:29.385448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.565 [2024-11-17 01:05:29.385456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.565 [2024-11-17 01:05:29.385522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.565 [2024-11-17 01:05:29.385531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:37.565 [2024-11-17 01:05:29.385539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.565 [2024-11-17 01:05:29.385547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.565 [2024-11-17 01:05:29.385605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.565 [2024-11-17 01:05:29.385616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:37.565 [2024-11-17 01:05:29.385624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.565 [2024-11-17 01:05:29.385635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.565 [2024-11-17 01:05:29.385653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.565 [2024-11-17 01:05:29.385661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:37.566 [2024-11-17 01:05:29.385669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.566 [2024-11-17 01:05:29.385677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.566 [2024-11-17 01:05:29.399436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.566 [2024-11-17 01:05:29.399481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:37.566 [2024-11-17 01:05:29.399500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.566 [2024-11-17 01:05:29.399509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.566 [2024-11-17 01:05:29.411283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.566 [2024-11-17 01:05:29.411323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:37.566 [2024-11-17 01:05:29.411335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.566 [2024-11-17 01:05:29.411344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.566 [2024-11-17 01:05:29.411423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.566 [2024-11-17 01:05:29.411433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:37.566 [2024-11-17 01:05:29.411443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.566 [2024-11-17 01:05:29.411451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.566 [2024-11-17 01:05:29.411495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.566 [2024-11-17 01:05:29.411504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:37.566 [2024-11-17 
01:05:29.411538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.566 [2024-11-17 01:05:29.411547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.566 [2024-11-17 01:05:29.411609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.566 [2024-11-17 01:05:29.411618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:37.566 [2024-11-17 01:05:29.411627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.566 [2024-11-17 01:05:29.411635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.566 [2024-11-17 01:05:29.411668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.566 [2024-11-17 01:05:29.411686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:37.566 [2024-11-17 01:05:29.411695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.566 [2024-11-17 01:05:29.411704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.566 [2024-11-17 01:05:29.411746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.566 [2024-11-17 01:05:29.411755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:37.566 [2024-11-17 01:05:29.411764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.566 [2024-11-17 01:05:29.411771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.566 [2024-11-17 01:05:29.411821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.566 [2024-11-17 01:05:29.411831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:37.566 [2024-11-17 01:05:29.411840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.566 [2024-11-17 01:05:29.411848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.566 [2024-11-17 01:05:29.411989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.925 ms, result 0 00:33:37.826 00:33:37.826 00:33:37.826 01:05:29 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:40.371 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:40.371 01:05:31 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:40.371 01:05:31 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:40.371 01:05:31 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:40.371 Process with pid 93456 is not found 00:33:40.371 Remove shared memory files 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 93456 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93456 ']' 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93456 00:33:40.371 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93456) - No such process 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 
93456 is not found' 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_band_md /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_l2p_l1 /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_l2p_l2 /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_l2p_l2_ctx /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_nvc_md /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_p2l_pool /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_sb /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_sb_shm /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_trim_bitmap /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_trim_log /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_trim_md /dev/hugepages/ftl_dafe0f70-ba94-4e9f-86ae-6e462132ec15_vmap 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:40.371 ************************************ 00:33:40.371 END TEST ftl_restore_fast 00:33:40.371 ************************************ 00:33:40.371 00:33:40.371 real 4m56.992s 00:33:40.371 user 4m44.207s 00:33:40.371 sys 0m12.194s 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:40.371 01:05:32 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:40.371 01:05:32 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:40.371 01:05:32 ftl -- ftl/ftl.sh@14 -- # killprocess 84274 00:33:40.371 01:05:32 ftl -- common/autotest_common.sh@950 -- # '[' -z 84274 ']' 00:33:40.371 01:05:32 ftl -- common/autotest_common.sh@954 -- # kill -0 84274 00:33:40.371 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84274) - No such process 00:33:40.371 Process with pid 84274 is not found 00:33:40.371 01:05:32 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 84274 is not found' 00:33:40.371 01:05:32 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:40.371 01:05:32 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96482 00:33:40.371 01:05:32 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96482 00:33:40.371 01:05:32 ftl -- common/autotest_common.sh@831 -- # '[' -z 96482 ']' 00:33:40.371 01:05:32 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.371 01:05:32 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:40.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.371 01:05:32 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.371 01:05:32 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:40.371 01:05:32 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:40.371 01:05:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:40.371 [2024-11-17 01:05:32.143411] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
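The teardown traced above follows a fixed pattern: md5sum -c testfile.md5 re-reads the restored file and checks it against the checksum recorded before the fast shutdown (the "OK" line is that comparison passing), and killprocess treats an already-dead PID as success, probing with kill -0, which tests for process existence without delivering any signal. A minimal sketch of that liveness idiom, assuming a simplified helper (illustrative only, not the exact autotest_common.sh source):

    # Hypothetical simplified killprocess mirroring the idiom in the log above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no PID recorded: nothing to do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0                         # already gone counts as success
        fi
        kill "$pid" && wait "$pid"           # terminate, then reap (pid is a child of the harness shell)
    }

The statistics dump earlier in the shutdown also shows how write amplification is derived: WAF = total writes / user writes = 4128 / 4096 = 1.0078, i.e. the FTL issued only 32 blocks of its own metadata writes on top of 4096 blocks of user data.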
00:33:40.371 [2024-11-17 01:05:32.143656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96482 ] 00:33:40.371 [2024-11-17 01:05:32.292539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.371 [2024-11-17 01:05:32.326915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.944 01:05:32 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:40.944 01:05:32 ftl -- common/autotest_common.sh@864 -- # return 0 00:33:40.944 01:05:32 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:41.206 nvme0n1 00:33:41.206 01:05:33 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:41.206 01:05:33 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:41.206 01:05:33 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:41.468 01:05:33 ftl -- ftl/common.sh@28 -- # stores=6c62c57d-bb29-4e68-9082-15c9fe239791 00:33:41.468 01:05:33 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:41.468 01:05:33 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c62c57d-bb29-4e68-9082-15c9fe239791 00:33:41.730 01:05:33 ftl -- ftl/ftl.sh@23 -- # killprocess 96482 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 96482 ']' 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@954 -- # kill -0 96482 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@955 -- # uname 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96482 00:33:41.730 killing process with pid 96482 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96482' 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@969 -- # kill 96482 00:33:41.730 01:05:33 ftl -- common/autotest_common.sh@974 -- # wait 96482 00:33:41.991 01:05:34 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:42.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:42.252 Waiting for block devices as requested 00:33:42.252 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:42.513 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:42.513 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:42.774 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:48.066 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:48.066 01:05:39 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:48.066 Remove shared memory files 00:33:48.066 01:05:39 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:48.066 01:05:39 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:48.066 01:05:39 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:48.066 01:05:39 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:48.066 01:05:39 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:48.066 01:05:39 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:48.066 
************************************ 00:33:48.066 END TEST ftl 00:33:48.066 ************************************ 00:33:48.066 00:33:48.066 real 18m29.346s 00:33:48.066 user 20m29.696s 00:33:48.066 sys 1m34.552s 00:33:48.066 01:05:39 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:48.066 01:05:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:48.066 01:05:39 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:48.066 01:05:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:48.066 01:05:39 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:48.066 01:05:39 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:48.066 01:05:39 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:48.066 01:05:39 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:48.066 01:05:39 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:48.066 01:05:39 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:48.066 01:05:39 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:48.066 01:05:39 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:48.066 01:05:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:48.066 01:05:39 -- common/autotest_common.sh@10 -- # set +x 00:33:48.066 01:05:39 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:48.066 01:05:39 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:48.066 01:05:39 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:48.066 01:05:39 -- common/autotest_common.sh@10 -- # set +x 00:33:49.008 INFO: APP EXITING 00:33:49.008 INFO: killing all VMs 00:33:49.008 INFO: killing vhost app 00:33:49.008 INFO: EXIT DONE 00:33:49.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:49.843 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:49.843 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:49.843 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:49.843 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:50.105 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:50.678 Cleaning 00:33:50.678 Removing: /var/run/dpdk/spdk0/config 00:33:50.678 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:50.678 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:50.678 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:50.678 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:50.679 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:50.679 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:50.679 Removing: /var/run/dpdk/spdk0 00:33:50.679 Removing: /var/run/dpdk/spdk_pid69674 00:33:50.679 Removing: /var/run/dpdk/spdk_pid69843 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70039 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70127 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70155 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70261 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70279 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70462 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70535 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70620 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70720 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70806 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70840 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70877 00:33:50.679 Removing: /var/run/dpdk/spdk_pid70947 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71053 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71478 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71526 
00:33:50.679 Removing: /var/run/dpdk/spdk_pid71572 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71583 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71646 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71657 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71726 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71736 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71784 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71802 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71844 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71862 00:33:50.679 Removing: /var/run/dpdk/spdk_pid71989 00:33:50.679 Removing: /var/run/dpdk/spdk_pid72020 00:33:50.679 Removing: /var/run/dpdk/spdk_pid72109 00:33:50.679 Removing: /var/run/dpdk/spdk_pid72270 00:33:50.679 Removing: /var/run/dpdk/spdk_pid72332 00:33:50.679 Removing: /var/run/dpdk/spdk_pid72363 00:33:50.679 Removing: /var/run/dpdk/spdk_pid72785 00:33:50.679 Removing: /var/run/dpdk/spdk_pid72878 00:33:50.679 Removing: /var/run/dpdk/spdk_pid72978 00:33:50.679 Removing: /var/run/dpdk/spdk_pid73033 00:33:50.679 Removing: /var/run/dpdk/spdk_pid73054 00:33:50.679 Removing: /var/run/dpdk/spdk_pid73138 00:33:50.679 Removing: /var/run/dpdk/spdk_pid73754 00:33:50.679 Removing: /var/run/dpdk/spdk_pid73780 00:33:50.679 Removing: /var/run/dpdk/spdk_pid74246 00:33:50.679 Removing: /var/run/dpdk/spdk_pid74339 00:33:50.679 Removing: /var/run/dpdk/spdk_pid74437 00:33:50.679 Removing: /var/run/dpdk/spdk_pid74479 00:33:50.679 Removing: /var/run/dpdk/spdk_pid74510 00:33:50.679 Removing: /var/run/dpdk/spdk_pid74530 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76357 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76483 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76487 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76499 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76550 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76554 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76566 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76611 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76615 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76627 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76672 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76676 00:33:50.679 Removing: /var/run/dpdk/spdk_pid76688 00:33:50.679 Removing: /var/run/dpdk/spdk_pid78055 00:33:50.679 Removing: /var/run/dpdk/spdk_pid78147 00:33:50.679 Removing: /var/run/dpdk/spdk_pid79540 00:33:50.679 Removing: /var/run/dpdk/spdk_pid80904 00:33:50.679 Removing: /var/run/dpdk/spdk_pid80958 00:33:50.679 Removing: /var/run/dpdk/spdk_pid81012 00:33:50.679 Removing: /var/run/dpdk/spdk_pid81066 00:33:50.679 Removing: /var/run/dpdk/spdk_pid81142 00:33:50.679 Removing: /var/run/dpdk/spdk_pid81205 00:33:50.679 Removing: /var/run/dpdk/spdk_pid81342 00:33:50.679 Removing: /var/run/dpdk/spdk_pid81696 00:33:50.679 Removing: /var/run/dpdk/spdk_pid81716 00:33:50.942 Removing: /var/run/dpdk/spdk_pid82162 00:33:50.942 Removing: /var/run/dpdk/spdk_pid82336 00:33:50.942 Removing: /var/run/dpdk/spdk_pid82424 00:33:50.942 Removing: /var/run/dpdk/spdk_pid82528 00:33:50.942 Removing: /var/run/dpdk/spdk_pid82565 00:33:50.942 Removing: /var/run/dpdk/spdk_pid82590 00:33:50.942 Removing: /var/run/dpdk/spdk_pid82877 00:33:50.942 Removing: /var/run/dpdk/spdk_pid82915 00:33:50.942 Removing: /var/run/dpdk/spdk_pid82971 00:33:50.942 Removing: /var/run/dpdk/spdk_pid83337 00:33:50.942 Removing: /var/run/dpdk/spdk_pid83481 00:33:50.942 Removing: /var/run/dpdk/spdk_pid84274 00:33:50.942 Removing: /var/run/dpdk/spdk_pid84392 00:33:50.942 Removing: /var/run/dpdk/spdk_pid84549 00:33:50.942 Removing: 
/var/run/dpdk/spdk_pid84645 00:33:50.942 Removing: /var/run/dpdk/spdk_pid84936 00:33:50.942 Removing: /var/run/dpdk/spdk_pid85191 00:33:50.942 Removing: /var/run/dpdk/spdk_pid85543 00:33:50.942 Removing: /var/run/dpdk/spdk_pid85698 00:33:50.942 Removing: /var/run/dpdk/spdk_pid85881 00:33:50.943 Removing: /var/run/dpdk/spdk_pid85918 00:33:50.943 Removing: /var/run/dpdk/spdk_pid86148 00:33:50.943 Removing: /var/run/dpdk/spdk_pid86162 00:33:50.943 Removing: /var/run/dpdk/spdk_pid86198 00:33:50.943 Removing: /var/run/dpdk/spdk_pid86490 00:33:50.943 Removing: /var/run/dpdk/spdk_pid86699 00:33:50.943 Removing: /var/run/dpdk/spdk_pid87299 00:33:50.943 Removing: /var/run/dpdk/spdk_pid88115 00:33:50.943 Removing: /var/run/dpdk/spdk_pid88855 00:33:50.943 Removing: /var/run/dpdk/spdk_pid89693 00:33:50.943 Removing: /var/run/dpdk/spdk_pid89829 00:33:50.943 Removing: /var/run/dpdk/spdk_pid89905 00:33:50.943 Removing: /var/run/dpdk/spdk_pid90585 00:33:50.943 Removing: /var/run/dpdk/spdk_pid90640 00:33:50.943 Removing: /var/run/dpdk/spdk_pid91293 00:33:50.943 Removing: /var/run/dpdk/spdk_pid91788 00:33:50.943 Removing: /var/run/dpdk/spdk_pid92520 00:33:50.943 Removing: /var/run/dpdk/spdk_pid92644 00:33:50.943 Removing: /var/run/dpdk/spdk_pid92677 00:33:50.943 Removing: /var/run/dpdk/spdk_pid92735 00:33:50.943 Removing: /var/run/dpdk/spdk_pid92784 00:33:50.943 Removing: /var/run/dpdk/spdk_pid92838 00:33:50.943 Removing: /var/run/dpdk/spdk_pid93038 00:33:50.943 Removing: /var/run/dpdk/spdk_pid93107 00:33:50.943 Removing: /var/run/dpdk/spdk_pid93167 00:33:50.943 Removing: /var/run/dpdk/spdk_pid93226 00:33:50.943 Removing: /var/run/dpdk/spdk_pid93255 00:33:50.943 Removing: /var/run/dpdk/spdk_pid93317 00:33:50.943 Removing: /var/run/dpdk/spdk_pid93456 00:33:50.943 Removing: /var/run/dpdk/spdk_pid93670 00:33:50.943 Removing: /var/run/dpdk/spdk_pid94338 00:33:50.943 Removing: /var/run/dpdk/spdk_pid95129 00:33:50.943 Removing: /var/run/dpdk/spdk_pid95685 00:33:50.943 Removing: /var/run/dpdk/spdk_pid96482 00:33:50.943 Clean 00:33:50.943 01:05:42 -- common/autotest_common.sh@1451 -- # return 0 00:33:50.943 01:05:42 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:50.943 01:05:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.943 01:05:42 -- common/autotest_common.sh@10 -- # set +x 00:33:50.943 01:05:42 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:50.943 01:05:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.943 01:05:42 -- common/autotest_common.sh@10 -- # set +x 00:33:51.205 01:05:43 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:51.205 01:05:43 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:51.205 01:05:43 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:51.205 01:05:43 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:51.205 01:05:43 -- spdk/autotest.sh@394 -- # hostname 00:33:51.205 01:05:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:51.205 geninfo: WARNING: invalid characters removed from testname! 
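The coverage post-processing that follows merges two lcov captures and then filters the union: cov_base.info (taken before the tests ran) and the cov_test.info capture above are combined with -a, after which -r strips DPDK, system, and example paths so the report covers only SPDK sources. In outline, with paths shortened and the long --rc options omitted (the full commands appear verbatim below):

    # Merge the baseline and post-test captures, then drop non-project code.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info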
00:34:17.800 01:06:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:19.717 01:06:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:22.264 01:06:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:24.813 01:06:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:27.418 01:06:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:29.965 01:06:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:32.514 01:06:24 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
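
The six commands above are the usual merge-then-filter pattern: -a adds tracefiles together (the pre-test baseline and the post-test capture), and each -r pass removes records whose source path matches a glob, so the bundled DPDK tree, system headers, and example/tool sources do not count toward SPDK coverage. The same flow in brief (a sketch with paths shortened for readability; the real run also passes the --rc options seen above):

    # Combine baseline and test tracefiles, then prune third-party paths.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pattern" -o cov_total.info
    done
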
00:34:32.514 01:06:24 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:34:32.514 01:06:24 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:34:32.514 01:06:24 -- common/autotest_common.sh@1681 -- $ lcov --version
00:34:32.514 01:06:24 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:34:32.514 01:06:24 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:34:32.514 01:06:24 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:34:32.514 01:06:24 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:34:32.514 01:06:24 -- scripts/common.sh@336 -- $ IFS=.-:
00:34:32.514 01:06:24 -- scripts/common.sh@336 -- $ read -ra ver1
00:34:32.514 01:06:24 -- scripts/common.sh@337 -- $ IFS=.-:
00:34:32.514 01:06:24 -- scripts/common.sh@337 -- $ read -ra ver2
00:34:32.514 01:06:24 -- scripts/common.sh@338 -- $ local 'op=<'
00:34:32.514 01:06:24 -- scripts/common.sh@340 -- $ ver1_l=2
00:34:32.514 01:06:24 -- scripts/common.sh@341 -- $ ver2_l=1
00:34:32.514 01:06:24 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:34:32.514 01:06:24 -- scripts/common.sh@344 -- $ case "$op" in
00:34:32.514 01:06:24 -- scripts/common.sh@345 -- $ : 1
00:34:32.514 01:06:24 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:34:32.514 01:06:24 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:32.514 01:06:24 -- scripts/common.sh@365 -- $ decimal 1
00:34:32.514 01:06:24 -- scripts/common.sh@353 -- $ local d=1
00:34:32.514 01:06:24 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:34:32.514 01:06:24 -- scripts/common.sh@355 -- $ echo 1
00:34:32.514 01:06:24 -- scripts/common.sh@365 -- $ ver1[v]=1
00:34:32.514 01:06:24 -- scripts/common.sh@366 -- $ decimal 2
00:34:32.514 01:06:24 -- scripts/common.sh@353 -- $ local d=2
00:34:32.514 01:06:24 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:34:32.514 01:06:24 -- scripts/common.sh@355 -- $ echo 2
00:34:32.514 01:06:24 -- scripts/common.sh@366 -- $ ver2[v]=2
00:34:32.514 01:06:24 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:34:32.514 01:06:24 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:34:32.514 01:06:24 -- scripts/common.sh@368 -- $ return 0
00:34:32.514 01:06:24 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:32.514 01:06:24 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:34:32.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:32.514 --rc genhtml_branch_coverage=1
00:34:32.514 --rc genhtml_function_coverage=1
00:34:32.514 --rc genhtml_legend=1
00:34:32.514 --rc geninfo_all_blocks=1
00:34:32.514 --rc geninfo_unexecuted_blocks=1
00:34:32.514
00:34:32.514 '
00:34:32.514 01:06:24 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:34:32.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:32.514 --rc genhtml_branch_coverage=1
00:34:32.514 --rc genhtml_function_coverage=1
00:34:32.514 --rc genhtml_legend=1
00:34:32.514 --rc geninfo_all_blocks=1
00:34:32.514 --rc geninfo_unexecuted_blocks=1
00:34:32.514
00:34:32.514 '
00:34:32.514 01:06:24 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:34:32.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:32.514 --rc genhtml_branch_coverage=1
00:34:32.514 --rc genhtml_function_coverage=1
00:34:32.514 --rc genhtml_legend=1
00:34:32.514 --rc geninfo_all_blocks=1
00:34:32.515 --rc geninfo_unexecuted_blocks=1
00:34:32.515
00:34:32.515 '
00:34:32.515 01:06:24 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:34:32.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:32.515 --rc genhtml_branch_coverage=1
00:34:32.515 --rc genhtml_function_coverage=1
00:34:32.515 --rc genhtml_legend=1
00:34:32.515 --rc geninfo_all_blocks=1
00:34:32.515 --rc geninfo_unexecuted_blocks=1
00:34:32.515
00:34:32.515 '
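
The xtrace above shows scripts/common.sh deciding whether the installed lcov predates 2.0: cmp_versions splits both version strings on '.', '-' and ':' into arrays, loops over the longer of the two, and compares component by component; since 1 < 2 in the first component, lt 1.15 2 returns 0 and the old-style --rc options are exported. A standalone sketch of that comparison (a simplification of the traced function, which additionally validates each component through decimal; this sketch assumes purely numeric components):

    # Return 0 if version $1 is strictly older than version $2.
    version_lt() {
        local -a ver1 ver2
        local i len
        IFS=.-: read -ra ver1 <<< "$1"    # split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            # Missing components compare as 0 (so 1.15 > 1, 1.0 == 1).
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1    # equal
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"
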
00:34:32.515 01:06:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:32.515 01:06:24 -- scripts/common.sh@15 -- $ shopt -s extglob
00:34:32.515 01:06:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:32.515 01:06:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:32.515 01:06:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:32.515 01:06:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:32.515 01:06:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:32.515 01:06:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:32.515 01:06:24 -- paths/export.sh@5 -- $ export PATH
00:34:32.515 01:06:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
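
paths/export.sh prepends each toolchain directory to PATH unconditionally, which is why the echoed PATH above carries duplicate golangci, go, and protoc entries; this is harmless, since lookup stops at the first match. A duplicate-free variant would guard each prepend (a sketch of the guard pattern, not what paths/export.sh actually does):

    # Prepend a directory only if PATH does not already contain it.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH, nothing to do
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH
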
00:34:32.515 01:06:24 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:34:32.515 01:06:24 -- common/autobuild_common.sh@479 -- $ date +%s
00:34:32.515 01:06:24 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731805584.XXXXXX
00:34:32.515 01:06:24 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731805584.pSS5Mm
00:34:32.515 01:06:24 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:34:32.515 01:06:24 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:34:32.515 01:06:24 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:34:32.515 01:06:24 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:34:32.515 01:06:24 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:34:32.515 01:06:24 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:34:32.515 01:06:24 -- common/autobuild_common.sh@495 -- $ get_config_params
00:34:32.515 01:06:24 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:34:32.515 01:06:24 -- common/autotest_common.sh@10 -- $ set +x
00:34:32.515 01:06:24 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:34:32.515 01:06:24 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:34:32.515 01:06:24 -- pm/common@17 -- $ local monitor
00:34:32.515 01:06:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:32.515 01:06:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:32.515 01:06:24 -- pm/common@25 -- $ sleep 1
00:34:32.515 01:06:24 -- pm/common@21 -- $ date +%s
00:34:32.515 01:06:24 -- pm/common@21 -- $ date +%s
00:34:32.515 01:06:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731805584
00:34:32.515 01:06:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731805584
00:34:32.515 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731805584_collect-cpu-load.pm.log
00:34:32.515 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731805584_collect-vmstat.pm.log
00:34:33.456 01:06:25 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:34:33.456 01:06:25 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:34:33.456 01:06:25 -- spdk/autopackage.sh@14 -- $ timing_finish
00:34:33.456 01:06:25 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:33.456 01:06:25 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:33.456 01:06:25 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:33.456 01:06:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:33.456 01:06:25 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:33.456 01:06:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:33.457 01:06:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:33.457 01:06:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:34:33.457 01:06:25 -- pm/common@44 -- $ pid=98158
00:34:33.457 01:06:25 -- pm/common@50 -- $ kill -TERM 98158
00:34:33.457 01:06:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:33.457 01:06:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:34:33.457 01:06:25 -- pm/common@44 -- $ pid=98159
00:34:33.457 01:06:25 -- pm/common@50 -- $ kill -TERM 98159
00:34:33.457 + [[ -n 5772 ]]
00:34:33.457 + sudo kill 5772
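
The pm/common trace above is the tail end of a classic pid-file lifecycle: start_monitor_resources launched collect-cpu-load and collect-vmstat in the background and recorded their PIDs under the power/ output directory, and the stop_monitor_resources EXIT trap now reads each .pid file back and delivers SIGTERM (PIDs 98158 and 98159 here). Reduced to its pattern (a sketch: the monitor command, POWER_DIR variable, and file names are illustrative, not the exact pm/common ones):

    # Start a background monitor and remember its PID.
    my_monitor --log stats.log &           # hypothetical monitor process
    echo $! > "$POWER_DIR/my_monitor.pid"

    # From an EXIT trap: signal every recorded monitor, then tidy up.
    stop_monitors() {
        local pidfile pid
        for pidfile in "$POWER_DIR"/*.pid; do
            [[ -e $pidfile ]] || continue
            pid=$(<"$pidfile")
            kill -TERM "$pid" 2>/dev/null
            rm -f "$pidfile"
        done
    }
    trap stop_monitors EXIT
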
00:34:33.467 [Pipeline] }
00:34:33.482 [Pipeline] // timeout
00:34:33.488 [Pipeline] }
00:34:33.504 [Pipeline] // stage
00:34:33.509 [Pipeline] }
00:34:33.523 [Pipeline] // catchError
00:34:33.532 [Pipeline] stage
00:34:33.534 [Pipeline] { (Stop VM)
00:34:33.545 [Pipeline] sh
00:34:33.828 + vagrant halt
00:34:36.369 ==> default: Halting domain...
00:34:42.971 [Pipeline] sh
00:34:43.254 + vagrant destroy -f
00:34:45.800 ==> default: Removing domain...
00:34:46.395 [Pipeline] sh
00:34:46.675 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:46.685 [Pipeline] }
00:34:46.698 [Pipeline] // stage
00:34:46.703 [Pipeline] }
00:34:46.717 [Pipeline] // dir
00:34:46.722 [Pipeline] }
00:34:46.735 [Pipeline] // wrap
00:34:46.740 [Pipeline] }
00:34:46.752 [Pipeline] // catchError
00:34:46.760 [Pipeline] stage
00:34:46.762 [Pipeline] { (Epilogue)
00:34:46.774 [Pipeline] sh
00:34:47.059 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:52.351 [Pipeline] catchError
00:34:52.353 [Pipeline] {
00:34:52.367 [Pipeline] sh
00:34:52.653 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:52.653 Artifacts sizes are good
00:34:52.664 [Pipeline] }
00:34:52.678 [Pipeline] // catchError
00:34:52.689 [Pipeline] archiveArtifacts
00:34:52.697 Archiving artifacts
00:34:52.841 [Pipeline] cleanWs
00:34:52.860 [WS-CLEANUP] Deleting project workspace...
00:34:52.860 [WS-CLEANUP] Deferred wipeout is used...
00:34:52.883 [WS-CLEANUP] done
00:34:52.885 [Pipeline] }
00:34:52.900 [Pipeline] // stage
00:34:52.905 [Pipeline] }
00:34:52.918 [Pipeline] // node
00:34:52.924 [Pipeline] End of Pipeline
00:34:52.965 Finished: SUCCESS